00:00:00.000 Started by upstream project "autotest-spdk-v24.09-vs-dpdk-v22.11" build number 205 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3707 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.162 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.163 The recommended git tool is: git 00:00:00.163 using credential 00000000-0000-0000-0000-000000000002 00:00:00.164 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.215 Fetching changes from the remote Git repository 00:00:00.219 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.267 Using shallow fetch with depth 1 00:00:00.267 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.267 > git --version # timeout=10 00:00:00.298 > git --version # 'git version 2.39.2' 00:00:00.298 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.311 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.311 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.759 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.771 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.783 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:08.783 > git config core.sparsecheckout # timeout=10 00:00:08.796 > git read-tree -mu HEAD # timeout=10 00:00:08.813 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:08.836 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:08.837 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:08.923 [Pipeline] Start of Pipeline 00:00:08.937 [Pipeline] library 00:00:08.939 Loading library shm_lib@master 00:00:08.939 Library shm_lib@master is cached. Copying from home. 00:00:08.956 [Pipeline] node 00:00:08.980 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:08.982 [Pipeline] { 00:00:08.992 [Pipeline] catchError 00:00:08.994 [Pipeline] { 00:00:09.004 [Pipeline] wrap 00:00:09.010 [Pipeline] { 00:00:09.017 [Pipeline] stage 00:00:09.019 [Pipeline] { (Prologue) 00:00:09.036 [Pipeline] echo 00:00:09.037 Node: VM-host-SM9 00:00:09.045 [Pipeline] cleanWs 00:00:09.055 [WS-CLEANUP] Deleting project workspace... 00:00:09.055 [WS-CLEANUP] Deferred wipeout is used... 
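
A condensed, hand-runnable sketch of the shallow checkout the job performs above, for anyone replaying it outside Jenkins; the local "jbp" directory name is illustrative, while the URL and revision are taken verbatim from the log:

    # Shallow-clone the build-pool repo and pin the exact revision from this run.
    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507
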
00:00:09.061 [WS-CLEANUP] done 00:00:09.296 [Pipeline] setCustomBuildProperty 00:00:09.363 [Pipeline] httpRequest 00:00:10.328 [Pipeline] echo 00:00:10.330 Sorcerer 10.211.164.20 is alive 00:00:10.339 [Pipeline] retry 00:00:10.340 [Pipeline] { 00:00:10.373 [Pipeline] httpRequest 00:00:10.377 HttpMethod: GET 00:00:10.378 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.378 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.397 Response Code: HTTP/1.1 200 OK 00:00:10.398 Success: Status code 200 is in the accepted range: 200,404 00:00:10.398 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:16.955 [Pipeline] } 00:00:16.976 [Pipeline] // retry 00:00:16.986 [Pipeline] sh 00:00:17.269 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:17.289 [Pipeline] httpRequest 00:00:17.771 [Pipeline] echo 00:00:17.773 Sorcerer 10.211.164.20 is alive 00:00:17.783 [Pipeline] retry 00:00:17.785 [Pipeline] { 00:00:17.799 [Pipeline] httpRequest 00:00:17.804 HttpMethod: GET 00:00:17.805 URL: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:17.805 Sending request to url: http://10.211.164.20/packages/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:00:17.829 Response Code: HTTP/1.1 200 OK 00:00:17.830 Success: Status code 200 is in the accepted range: 200,404 00:00:17.830 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:36.843 [Pipeline] } 00:01:36.857 [Pipeline] // retry 00:01:36.863 [Pipeline] sh 00:01:37.139 + tar --no-same-owner -xf spdk_b18e1bd6297ec2f89ab275de3193457af1c946df.tar.gz 00:01:39.684 [Pipeline] sh 00:01:39.962 + git -C spdk log --oneline -n5 00:01:39.962 b18e1bd62 version: v24.09.1-pre 00:01:39.962 19524ad45 version: v24.09 00:01:39.962 9756b40a3 dpdk: update submodule to include alarm_cancel fix 00:01:39.962 a808500d2 test/nvmf: disable nvmf_shutdown_tc4 on e810 00:01:39.962 3024272c6 bdev/nvme: take nvme_ctrlr.mutex when setting keys 00:01:39.979 [Pipeline] withCredentials 00:01:39.987 > git --version # timeout=10 00:01:40.001 > git --version # 'git version 2.39.2' 00:01:40.015 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:40.016 [Pipeline] { 00:01:40.024 [Pipeline] retry 00:01:40.026 [Pipeline] { 00:01:40.040 [Pipeline] sh 00:01:40.317 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:40.585 [Pipeline] } 00:01:40.600 [Pipeline] // retry 00:01:40.604 [Pipeline] } 00:01:40.617 [Pipeline] // withCredentials 00:01:40.626 [Pipeline] httpRequest 00:01:41.020 [Pipeline] echo 00:01:41.021 Sorcerer 10.211.164.20 is alive 00:01:41.029 [Pipeline] retry 00:01:41.031 [Pipeline] { 00:01:41.043 [Pipeline] httpRequest 00:01:41.047 HttpMethod: GET 00:01:41.048 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:41.048 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:41.049 Response Code: HTTP/1.1 200 OK 00:01:41.050 Success: Status code 200 is in the accepted range: 200,404 00:01:41.050 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:46.272 [Pipeline] } 00:01:46.290 [Pipeline] // retry 00:01:46.298 [Pipeline] sh 00:01:46.579 + tar 
--no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:47.970 [Pipeline] sh 00:01:48.251 + git -C dpdk log --oneline -n5 00:01:48.251 caf0f5d395 version: 22.11.4 00:01:48.251 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:48.251 dc9c799c7d vhost: fix missing spinlock unlock 00:01:48.251 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:48.251 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:48.269 [Pipeline] writeFile 00:01:48.285 [Pipeline] sh 00:01:48.567 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:48.579 [Pipeline] sh 00:01:48.859 + cat autorun-spdk.conf 00:01:48.859 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:48.859 SPDK_TEST_NVMF=1 00:01:48.859 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:48.859 SPDK_TEST_URING=1 00:01:48.859 SPDK_TEST_USDT=1 00:01:48.859 SPDK_RUN_UBSAN=1 00:01:48.859 NET_TYPE=virt 00:01:48.859 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:48.859 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:48.859 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:48.866 RUN_NIGHTLY=1 00:01:48.869 [Pipeline] } 00:01:48.883 [Pipeline] // stage 00:01:48.899 [Pipeline] stage 00:01:48.901 [Pipeline] { (Run VM) 00:01:48.915 [Pipeline] sh 00:01:49.195 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:49.195 + echo 'Start stage prepare_nvme.sh' 00:01:49.195 Start stage prepare_nvme.sh 00:01:49.195 + [[ -n 0 ]] 00:01:49.195 + disk_prefix=ex0 00:01:49.195 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:49.195 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:49.195 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:49.195 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:49.195 ++ SPDK_TEST_NVMF=1 00:01:49.195 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:49.195 ++ SPDK_TEST_URING=1 00:01:49.195 ++ SPDK_TEST_USDT=1 00:01:49.195 ++ SPDK_RUN_UBSAN=1 00:01:49.195 ++ NET_TYPE=virt 00:01:49.195 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:49.195 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:49.195 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:49.195 ++ RUN_NIGHTLY=1 00:01:49.195 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:49.195 + nvme_files=() 00:01:49.195 + declare -A nvme_files 00:01:49.195 + backend_dir=/var/lib/libvirt/images/backends 00:01:49.195 + nvme_files['nvme.img']=5G 00:01:49.195 + nvme_files['nvme-cmb.img']=5G 00:01:49.195 + nvme_files['nvme-multi0.img']=4G 00:01:49.195 + nvme_files['nvme-multi1.img']=4G 00:01:49.195 + nvme_files['nvme-multi2.img']=4G 00:01:49.195 + nvme_files['nvme-openstack.img']=8G 00:01:49.195 + nvme_files['nvme-zns.img']=5G 00:01:49.195 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:49.195 + (( SPDK_TEST_FTL == 1 )) 00:01:49.195 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:49.196 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:49.196 + for nvme in "${!nvme_files[@]}" 00:01:49.196 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:49.196 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:49.196 + for nvme in "${!nvme_files[@]}" 00:01:49.196 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:49.196 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:49.196 + for nvme in "${!nvme_files[@]}" 00:01:49.196 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:49.196 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:49.196 + for nvme in "${!nvme_files[@]}" 00:01:49.196 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:49.196 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:49.196 + for nvme in "${!nvme_files[@]}" 00:01:49.196 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:49.196 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:49.196 + for nvme in "${!nvme_files[@]}" 00:01:49.196 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:49.196 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:49.196 + for nvme in "${!nvme_files[@]}" 00:01:49.196 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:49.455 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:49.455 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:49.455 + echo 'End stage prepare_nvme.sh' 00:01:49.455 End stage prepare_nvme.sh 00:01:49.467 [Pipeline] sh 00:01:49.748 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:49.748 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39 00:01:49.748 00:01:49.748 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:49.748 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:49.748 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:49.748 HELP=0 00:01:49.748 DRY_RUN=0 00:01:49.748 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:49.748 NVME_DISKS_TYPE=nvme,nvme, 00:01:49.748 NVME_AUTO_CREATE=0 00:01:49.748 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:49.748 NVME_CMB=,, 00:01:49.748 NVME_PMR=,, 00:01:49.748 NVME_ZNS=,, 00:01:49.748 NVME_MS=,, 00:01:49.748 NVME_FDP=,, 
00:01:49.748 SPDK_VAGRANT_DISTRO=fedora39 00:01:49.748 SPDK_VAGRANT_VMCPU=10 00:01:49.748 SPDK_VAGRANT_VMRAM=12288 00:01:49.748 SPDK_VAGRANT_PROVIDER=libvirt 00:01:49.748 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:49.748 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:49.748 SPDK_OPENSTACK_NETWORK=0 00:01:49.748 VAGRANT_PACKAGE_BOX=0 00:01:49.748 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:49.748 FORCE_DISTRO=true 00:01:49.748 VAGRANT_BOX_VERSION= 00:01:49.748 EXTRA_VAGRANTFILES= 00:01:49.748 NIC_MODEL=e1000 00:01:49.748 00:01:49.748 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:49.748 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:53.031 Bringing machine 'default' up with 'libvirt' provider... 00:01:53.031 ==> default: Creating image (snapshot of base box volume). 00:01:53.290 ==> default: Creating domain with the following settings... 00:01:53.290 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733610787_c5daf9fa0a7a80660208 00:01:53.290 ==> default: -- Domain type: kvm 00:01:53.290 ==> default: -- Cpus: 10 00:01:53.290 ==> default: -- Feature: acpi 00:01:53.290 ==> default: -- Feature: apic 00:01:53.290 ==> default: -- Feature: pae 00:01:53.290 ==> default: -- Memory: 12288M 00:01:53.290 ==> default: -- Memory Backing: hugepages: 00:01:53.290 ==> default: -- Management MAC: 00:01:53.290 ==> default: -- Loader: 00:01:53.290 ==> default: -- Nvram: 00:01:53.290 ==> default: -- Base box: spdk/fedora39 00:01:53.290 ==> default: -- Storage pool: default 00:01:53.290 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733610787_c5daf9fa0a7a80660208.img (20G) 00:01:53.290 ==> default: -- Volume Cache: default 00:01:53.290 ==> default: -- Kernel: 00:01:53.290 ==> default: -- Initrd: 00:01:53.290 ==> default: -- Graphics Type: vnc 00:01:53.290 ==> default: -- Graphics Port: -1 00:01:53.290 ==> default: -- Graphics IP: 127.0.0.1 00:01:53.290 ==> default: -- Graphics Password: Not defined 00:01:53.290 ==> default: -- Video Type: cirrus 00:01:53.290 ==> default: -- Video VRAM: 9216 00:01:53.290 ==> default: -- Sound Type: 00:01:53.290 ==> default: -- Keymap: en-us 00:01:53.290 ==> default: -- TPM Path: 00:01:53.290 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:53.291 ==> default: -- Command line args: 00:01:53.291 ==> default: -> value=-device, 00:01:53.291 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:53.291 ==> default: -> value=-drive, 00:01:53.291 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:53.291 ==> default: -> value=-device, 00:01:53.291 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:53.291 ==> default: -> value=-device, 00:01:53.291 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:53.291 ==> default: -> value=-drive, 00:01:53.291 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:53.291 ==> default: -> value=-device, 00:01:53.291 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:53.291 ==> default: -> value=-drive, 00:01:53.291 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:53.291 ==> default: -> value=-device, 00:01:53.291 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:53.291 ==> default: -> value=-drive, 00:01:53.291 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:53.291 ==> default: -> value=-device, 00:01:53.291 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:53.291 ==> default: Creating shared folders metadata... 00:01:53.291 ==> default: Starting domain. 00:01:54.702 ==> default: Waiting for domain to get an IP address... 00:02:12.785 ==> default: Waiting for SSH to become available... 00:02:13.721 ==> default: Configuring and enabling network interfaces... 00:02:17.917 default: SSH address: 192.168.121.138:22 00:02:17.917 default: SSH username: vagrant 00:02:17.917 default: SSH auth method: private key 00:02:20.445 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:27.004 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:32.325 ==> default: Mounting SSHFS shared folder... 00:02:33.702 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:33.702 ==> default: Checking Mount.. 00:02:35.077 ==> default: Folder Successfully Mounted! 00:02:35.077 ==> default: Running provisioner: file... 00:02:35.646 default: ~/.gitconfig => .gitconfig 00:02:36.213 00:02:36.213 SUCCESS! 00:02:36.213 00:02:36.213 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:36.213 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:36.213 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:36.213 00:02:36.222 [Pipeline] } 00:02:36.237 [Pipeline] // stage 00:02:36.247 [Pipeline] dir 00:02:36.248 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:36.250 [Pipeline] { 00:02:36.262 [Pipeline] catchError 00:02:36.264 [Pipeline] { 00:02:36.281 [Pipeline] sh 00:02:36.564 + vagrant ssh-config --host vagrant 00:02:36.564 + sed -ne /^Host/,$p 00:02:36.564 + tee ssh_conf 00:02:40.747 Host vagrant 00:02:40.747 HostName 192.168.121.138 00:02:40.747 User vagrant 00:02:40.747 Port 22 00:02:40.747 UserKnownHostsFile /dev/null 00:02:40.747 StrictHostKeyChecking no 00:02:40.747 PasswordAuthentication no 00:02:40.747 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:40.747 IdentitiesOnly yes 00:02:40.747 LogLevel FATAL 00:02:40.747 ForwardAgent yes 00:02:40.747 ForwardX11 yes 00:02:40.747 00:02:40.758 [Pipeline] withEnv 00:02:40.760 [Pipeline] { 00:02:40.771 [Pipeline] sh 00:02:41.043 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:41.043 source /etc/os-release 00:02:41.043 [[ -e /image.version ]] && img=$(< /image.version) 00:02:41.043 # Minimal, systemd-like check. 
00:02:41.043 if [[ -e /.dockerenv ]]; then 00:02:41.043 # Clear garbage from the node's name: 00:02:41.043 # agt-er_autotest_547-896 -> autotest_547-896 00:02:41.043 # $HOSTNAME is the actual container id 00:02:41.043 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:41.043 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:41.043 # We can assume this is a mount from a host where container is running, 00:02:41.043 # so fetch its hostname to easily identify the target swarm worker. 00:02:41.043 container="$(< /etc/hostname) ($agent)" 00:02:41.043 else 00:02:41.043 # Fallback 00:02:41.043 container=$agent 00:02:41.043 fi 00:02:41.043 fi 00:02:41.043 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:41.043 00:02:41.311 [Pipeline] } 00:02:41.326 [Pipeline] // withEnv 00:02:41.334 [Pipeline] setCustomBuildProperty 00:02:41.346 [Pipeline] stage 00:02:41.348 [Pipeline] { (Tests) 00:02:41.364 [Pipeline] sh 00:02:41.640 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:41.964 [Pipeline] sh 00:02:42.243 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:42.259 [Pipeline] timeout 00:02:42.259 Timeout set to expire in 1 hr 0 min 00:02:42.261 [Pipeline] { 00:02:42.277 [Pipeline] sh 00:02:42.560 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:43.128 HEAD is now at b18e1bd62 version: v24.09.1-pre 00:02:43.141 [Pipeline] sh 00:02:43.428 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:43.703 [Pipeline] sh 00:02:43.983 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:44.259 [Pipeline] sh 00:02:44.541 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:44.799 ++ readlink -f spdk_repo 00:02:44.799 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:44.799 + [[ -n /home/vagrant/spdk_repo ]] 00:02:44.799 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:44.799 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:44.799 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:44.799 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:44.799 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:44.799 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:44.799 + cd /home/vagrant/spdk_repo 00:02:44.799 + source /etc/os-release 00:02:44.799 ++ NAME='Fedora Linux' 00:02:44.799 ++ VERSION='39 (Cloud Edition)' 00:02:44.800 ++ ID=fedora 00:02:44.800 ++ VERSION_ID=39 00:02:44.800 ++ VERSION_CODENAME= 00:02:44.800 ++ PLATFORM_ID=platform:f39 00:02:44.800 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:44.800 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:44.800 ++ LOGO=fedora-logo-icon 00:02:44.800 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:44.800 ++ HOME_URL=https://fedoraproject.org/ 00:02:44.800 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:44.800 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:44.800 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:44.800 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:44.800 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:44.800 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:44.800 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:44.800 ++ SUPPORT_END=2024-11-12 00:02:44.800 ++ VARIANT='Cloud Edition' 00:02:44.800 ++ VARIANT_ID=cloud 00:02:44.800 + uname -a 00:02:44.800 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:44.800 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:45.058 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:45.058 Hugepages 00:02:45.058 node hugesize free / total 00:02:45.317 node0 1048576kB 0 / 0 00:02:45.317 node0 2048kB 0 / 0 00:02:45.317 00:02:45.317 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:45.317 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:45.317 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:45.317 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:45.317 + rm -f /tmp/spdk-ld-path 00:02:45.317 + source autorun-spdk.conf 00:02:45.317 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:45.317 ++ SPDK_TEST_NVMF=1 00:02:45.318 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:45.318 ++ SPDK_TEST_URING=1 00:02:45.318 ++ SPDK_TEST_USDT=1 00:02:45.318 ++ SPDK_RUN_UBSAN=1 00:02:45.318 ++ NET_TYPE=virt 00:02:45.318 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:45.318 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:45.318 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:45.318 ++ RUN_NIGHTLY=1 00:02:45.318 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:45.318 + [[ -n '' ]] 00:02:45.318 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:45.318 + for M in /var/spdk/build-*-manifest.txt 00:02:45.318 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:45.318 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:45.318 + for M in /var/spdk/build-*-manifest.txt 00:02:45.318 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:45.318 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:45.318 + for M in /var/spdk/build-*-manifest.txt 00:02:45.318 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:45.318 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:45.318 ++ uname 00:02:45.318 + [[ Linux == \L\i\n\u\x ]] 00:02:45.318 + sudo dmesg -T 00:02:45.318 + sudo dmesg --clear 00:02:45.318 + dmesg_pid=5990 00:02:45.318 + [[ Fedora Linux == FreeBSD ]] 
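
The NVMe block devices listed in the setup.sh status table above are backed by the raw files created in the prepare_nvme.sh stage earlier in this log; a minimal sketch of that stage's associative-array pattern, with sizes and paths copied from the xtrace (create_nvme_img.sh's -n/-s flags are inferred from the trace, not from its source):

    # Map image name -> size, then create each raw backing file in a loop.
    declare -A nvme_files
    nvme_files['nvme.img']=5G
    nvme_files['nvme-multi0.img']=4G
    nvme_files['nvme-multi1.img']=4G
    nvme_files['nvme-multi2.img']=4G
    backend_dir=/var/lib/libvirt/images/backends
    disk_prefix=ex0
    for nvme in "${!nvme_files[@]}"; do
        # -n: output image path, -s: image size (as seen in the trace above)
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/$disk_prefix-$nvme" -s "${nvme_files[$nvme]}"
    done
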
00:02:45.318 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:45.318 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:45.318 + sudo dmesg -Tw 00:02:45.318 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:45.318 + [[ -x /usr/src/fio-static/fio ]] 00:02:45.318 + export FIO_BIN=/usr/src/fio-static/fio 00:02:45.318 + FIO_BIN=/usr/src/fio-static/fio 00:02:45.318 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:45.318 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:45.318 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:45.318 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:45.318 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:45.318 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:45.318 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:45.318 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:45.318 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:45.318 Test configuration: 00:02:45.318 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:45.318 SPDK_TEST_NVMF=1 00:02:45.318 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:45.318 SPDK_TEST_URING=1 00:02:45.318 SPDK_TEST_USDT=1 00:02:45.318 SPDK_RUN_UBSAN=1 00:02:45.318 NET_TYPE=virt 00:02:45.318 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:45.318 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:45.318 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:45.577 RUN_NIGHTLY=1 22:34:00 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:45.577 22:34:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:45.577 22:34:00 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:45.577 22:34:00 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:45.577 22:34:00 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:45.577 22:34:00 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:45.577 22:34:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.577 22:34:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.577 22:34:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.577 22:34:00 -- paths/export.sh@5 -- $ export PATH 00:02:45.578 22:34:00 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:45.578 22:34:00 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:45.578 22:34:00 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:45.578 22:34:00 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733610840.XXXXXX 00:02:45.578 22:34:00 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733610840.R91TLj 00:02:45.578 22:34:00 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:45.578 22:34:00 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:02:45.578 22:34:00 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:45.578 22:34:00 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:45.578 22:34:00 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:45.578 22:34:00 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:45.578 22:34:00 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:45.578 22:34:00 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:45.578 22:34:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:45.578 22:34:00 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:45.578 22:34:00 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:45.578 22:34:00 -- pm/common@17 -- $ local monitor 00:02:45.578 22:34:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.578 22:34:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.578 22:34:00 -- pm/common@25 -- $ sleep 1 00:02:45.578 22:34:00 -- pm/common@21 -- $ date +%s 00:02:45.578 22:34:00 -- pm/common@21 -- $ date +%s 00:02:45.578 22:34:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733610840 00:02:45.578 22:34:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733610840 00:02:45.578 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733610840_collect-vmstat.pm.log 00:02:45.578 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733610840_collect-cpu-load.pm.log 00:02:46.515 22:34:01 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:46.515 22:34:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:46.515 22:34:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:46.515 22:34:01 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:46.515 22:34:01 -- spdk/autobuild.sh@16 -- $ date -u 00:02:46.515 Sat 
Dec 7 10:34:01 PM UTC 2024 00:02:46.515 22:34:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:46.515 v24.09-1-gb18e1bd62 00:02:46.515 22:34:01 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:46.515 22:34:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:46.515 22:34:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:46.515 22:34:01 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:46.515 22:34:01 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:46.515 22:34:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.515 ************************************ 00:02:46.515 START TEST ubsan 00:02:46.515 ************************************ 00:02:46.515 using ubsan 00:02:46.515 22:34:01 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:46.515 00:02:46.515 real 0m0.000s 00:02:46.515 user 0m0.000s 00:02:46.515 sys 0m0.000s 00:02:46.515 22:34:01 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:46.515 ************************************ 00:02:46.515 END TEST ubsan 00:02:46.515 ************************************ 00:02:46.515 22:34:01 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:46.515 22:34:01 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:46.515 22:34:01 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:46.515 22:34:01 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:46.515 22:34:01 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:46.515 22:34:01 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:46.515 22:34:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.515 ************************************ 00:02:46.515 START TEST build_native_dpdk 00:02:46.515 ************************************ 00:02:46.515 22:34:01 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:46.515 22:34:01 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:46.515 22:34:01 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:46.515 22:34:01 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:46.515 22:34:01 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:46.515 22:34:01 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:46.515 22:34:01 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:46.515 22:34:01 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:46.515 22:34:01 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:46.515 22:34:01 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:46.515 22:34:01 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:46.515 22:34:01 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:46.515 22:34:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:46.515 22:34:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:46.515 22:34:01 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:46.515 22:34:01 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:46.515 22:34:01 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:46.775 22:34:01 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:46.775 caf0f5d395 version: 22.11.4 00:02:46.775 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:46.775 dc9c799c7d vhost: fix missing spinlock unlock 00:02:46.775 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:46.775 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:46.775 
22:34:01 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:46.775 patching file config/rte_config.h 00:02:46.775 Hunk #1 succeeded at 60 (offset 1 line). 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:46.775 patching file lib/pcapng/rte_pcapng.c 00:02:46.775 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:46.775 22:34:01 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:46.775 22:34:01 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:46.776 22:34:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:46.776 22:34:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:46.776 22:34:01 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:46.776 22:34:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:46.776 22:34:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:46.776 22:34:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:46.776 22:34:01 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:46.776 22:34:01 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:46.776 22:34:01 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:46.776 22:34:01 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:46.776 22:34:01 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:46.776 22:34:01 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:46.776 22:34:01 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:46.776 22:34:01 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:46.776 22:34:01 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:46.776 22:34:01 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:46.776 22:34:01 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:46.776 22:34:01 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:46.776 22:34:01 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:46.776 22:34:01 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:52.038 The Meson build system 00:02:52.038 Version: 1.5.0 00:02:52.038 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:52.038 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:52.038 Build type: native build 00:02:52.038 Program cat found: YES (/usr/bin/cat) 00:02:52.038 Project name: DPDK 00:02:52.038 Project version: 22.11.4 00:02:52.038 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:52.038 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:52.038 Host machine cpu family: x86_64 00:02:52.038 Host machine cpu: x86_64 00:02:52.038 Message: ## Building in Developer Mode ## 00:02:52.038 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:52.038 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:52.038 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:52.038 Program objdump found: YES (/usr/bin/objdump) 00:02:52.038 Program python3 found: YES (/usr/bin/python3) 00:02:52.038 Program cat found: YES (/usr/bin/cat) 00:02:52.038 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
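
The lt/ge checks traced above (scripts/common.sh via cmp_versions) decide which DPDK compatibility patches apply before this meson configure step; a condensed, self-contained sketch of that component-wise comparison, simplified to purely numeric dotted versions (the real helper also normalizes non-numeric components via decimal()):

    # usage: cmp_versions 22.11.4 '<' 24.07.0 ; exit status 0 = relation holds
    cmp_versions() {
        local IFS=.-:            # split version strings on '.', '-' and ':'
        local op=$2
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            if (( a > b )); then                  # first differing component decides
                [[ $op == '>' || $op == '>=' ]]; return $?
            elif (( a < b )); then
                [[ $op == '<' || $op == '<=' ]]; return $?
            fi
        done
        [[ $op == *=* ]]                          # equal: only <=, >= (or ==) hold
    }

Replaying the checks from the trace: cmp_versions 22.11.4 '<' 21.11.0 returns 1 and cmp_versions 22.11.4 '>=' 24.07.0 returns 1, while cmp_versions 22.11.4 '<' 24.07.0 returns 0, matching the return codes logged above.
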
00:02:52.038 Checking for size of "void *" : 8 00:02:52.038 Checking for size of "void *" : 8 (cached) 00:02:52.038 Library m found: YES 00:02:52.038 Library numa found: YES 00:02:52.038 Has header "numaif.h" : YES 00:02:52.038 Library fdt found: NO 00:02:52.038 Library execinfo found: NO 00:02:52.038 Has header "execinfo.h" : YES 00:02:52.038 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:52.038 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:52.038 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:52.038 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:52.038 Run-time dependency openssl found: YES 3.1.1 00:02:52.038 Run-time dependency libpcap found: YES 1.10.4 00:02:52.038 Has header "pcap.h" with dependency libpcap: YES 00:02:52.038 Compiler for C supports arguments -Wcast-qual: YES 00:02:52.038 Compiler for C supports arguments -Wdeprecated: YES 00:02:52.038 Compiler for C supports arguments -Wformat: YES 00:02:52.038 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:52.038 Compiler for C supports arguments -Wformat-security: NO 00:02:52.038 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:52.038 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:52.038 Compiler for C supports arguments -Wnested-externs: YES 00:02:52.038 Compiler for C supports arguments -Wold-style-definition: YES 00:02:52.038 Compiler for C supports arguments -Wpointer-arith: YES 00:02:52.038 Compiler for C supports arguments -Wsign-compare: YES 00:02:52.038 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:52.038 Compiler for C supports arguments -Wundef: YES 00:02:52.038 Compiler for C supports arguments -Wwrite-strings: YES 00:02:52.038 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:52.038 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:52.038 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:52.038 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:52.038 Compiler for C supports arguments -mavx512f: YES 00:02:52.038 Checking if "AVX512 checking" compiles: YES 00:02:52.038 Fetching value of define "__SSE4_2__" : 1 00:02:52.038 Fetching value of define "__AES__" : 1 00:02:52.038 Fetching value of define "__AVX__" : 1 00:02:52.038 Fetching value of define "__AVX2__" : 1 00:02:52.038 Fetching value of define "__AVX512BW__" : (undefined) 00:02:52.038 Fetching value of define "__AVX512CD__" : (undefined) 00:02:52.038 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:52.038 Fetching value of define "__AVX512F__" : (undefined) 00:02:52.038 Fetching value of define "__AVX512VL__" : (undefined) 00:02:52.038 Fetching value of define "__PCLMUL__" : 1 00:02:52.038 Fetching value of define "__RDRND__" : 1 00:02:52.038 Fetching value of define "__RDSEED__" : 1 00:02:52.038 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:52.038 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:52.038 Message: lib/kvargs: Defining dependency "kvargs" 00:02:52.038 Message: lib/telemetry: Defining dependency "telemetry" 00:02:52.038 Checking for function "getentropy" : YES 00:02:52.038 Message: lib/eal: Defining dependency "eal" 00:02:52.038 Message: lib/ring: Defining dependency "ring" 00:02:52.038 Message: lib/rcu: Defining dependency "rcu" 00:02:52.038 Message: lib/mempool: Defining dependency "mempool" 00:02:52.038 Message: lib/mbuf: Defining dependency "mbuf" 00:02:52.038 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:52.038 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:52.038 Compiler for C supports arguments -mpclmul: YES 00:02:52.038 Compiler for C supports arguments -maes: YES 00:02:52.038 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:52.038 Compiler for C supports arguments -mavx512bw: YES 00:02:52.038 Compiler for C supports arguments -mavx512dq: YES 00:02:52.038 Compiler for C supports arguments -mavx512vl: YES 00:02:52.038 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:52.038 Compiler for C supports arguments -mavx2: YES 00:02:52.038 Compiler for C supports arguments -mavx: YES 00:02:52.038 Message: lib/net: Defining dependency "net" 00:02:52.038 Message: lib/meter: Defining dependency "meter" 00:02:52.038 Message: lib/ethdev: Defining dependency "ethdev" 00:02:52.038 Message: lib/pci: Defining dependency "pci" 00:02:52.038 Message: lib/cmdline: Defining dependency "cmdline" 00:02:52.038 Message: lib/metrics: Defining dependency "metrics" 00:02:52.038 Message: lib/hash: Defining dependency "hash" 00:02:52.038 Message: lib/timer: Defining dependency "timer" 00:02:52.038 Fetching value of define "__AVX2__" : 1 (cached) 00:02:52.038 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:52.038 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:52.038 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:52.038 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:52.038 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:52.038 Message: lib/acl: Defining dependency "acl" 00:02:52.038 Message: lib/bbdev: Defining dependency "bbdev" 00:02:52.038 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:52.038 Run-time dependency libelf found: YES 0.191 00:02:52.038 Message: lib/bpf: Defining dependency "bpf" 00:02:52.038 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:52.038 Message: lib/compressdev: Defining dependency "compressdev" 00:02:52.038 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:52.038 Message: lib/distributor: Defining dependency "distributor" 00:02:52.038 Message: lib/efd: Defining dependency "efd" 00:02:52.038 Message: lib/eventdev: Defining dependency "eventdev" 00:02:52.038 Message: lib/gpudev: Defining dependency "gpudev" 00:02:52.038 Message: lib/gro: Defining dependency "gro" 00:02:52.038 Message: lib/gso: Defining dependency "gso" 00:02:52.038 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:52.038 Message: lib/jobstats: Defining dependency "jobstats" 00:02:52.038 Message: lib/latencystats: Defining dependency "latencystats" 00:02:52.038 Message: lib/lpm: Defining dependency "lpm" 00:02:52.038 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:52.038 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:52.038 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:52.038 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:52.038 Message: lib/member: Defining dependency "member" 00:02:52.038 Message: lib/pcapng: Defining dependency "pcapng" 00:02:52.038 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:52.038 Message: lib/power: Defining dependency "power" 00:02:52.038 Message: lib/rawdev: Defining dependency "rawdev" 00:02:52.038 Message: lib/regexdev: Defining dependency "regexdev" 00:02:52.038 Message: lib/dmadev: Defining dependency "dmadev" 00:02:52.038 Message: lib/rib: Defining 
dependency "rib" 00:02:52.038 Message: lib/reorder: Defining dependency "reorder" 00:02:52.038 Message: lib/sched: Defining dependency "sched" 00:02:52.038 Message: lib/security: Defining dependency "security" 00:02:52.038 Message: lib/stack: Defining dependency "stack" 00:02:52.038 Has header "linux/userfaultfd.h" : YES 00:02:52.038 Message: lib/vhost: Defining dependency "vhost" 00:02:52.038 Message: lib/ipsec: Defining dependency "ipsec" 00:02:52.039 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:52.039 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:52.039 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:52.039 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:52.039 Message: lib/fib: Defining dependency "fib" 00:02:52.039 Message: lib/port: Defining dependency "port" 00:02:52.039 Message: lib/pdump: Defining dependency "pdump" 00:02:52.039 Message: lib/table: Defining dependency "table" 00:02:52.039 Message: lib/pipeline: Defining dependency "pipeline" 00:02:52.039 Message: lib/graph: Defining dependency "graph" 00:02:52.039 Message: lib/node: Defining dependency "node" 00:02:52.039 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:52.039 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:52.039 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:52.039 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:52.039 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:52.039 Compiler for C supports arguments -Wno-unused-value: YES 00:02:52.039 Compiler for C supports arguments -Wno-format: YES 00:02:52.039 Compiler for C supports arguments -Wno-format-security: YES 00:02:52.039 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:53.414 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:53.414 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:53.414 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:53.414 Fetching value of define "__AVX2__" : 1 (cached) 00:02:53.414 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:53.414 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:53.414 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:53.414 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:53.414 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:53.414 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:53.414 Configuring doxy-api.conf using configuration 00:02:53.414 Program sphinx-build found: NO 00:02:53.414 Configuring rte_build_config.h using configuration 00:02:53.414 Message: 00:02:53.414 ================= 00:02:53.414 Applications Enabled 00:02:53.414 ================= 00:02:53.414 00:02:53.414 apps: 00:02:53.414 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:53.414 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:53.414 test-security-perf, 00:02:53.414 00:02:53.414 Message: 00:02:53.414 ================= 00:02:53.414 Libraries Enabled 00:02:53.414 ================= 00:02:53.414 00:02:53.414 libs: 00:02:53.414 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:53.414 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:53.414 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:53.414 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:53.414 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:53.414 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:53.414 table, pipeline, graph, node, 00:02:53.414 00:02:53.414 Message: 00:02:53.414 =============== 00:02:53.414 Drivers Enabled 00:02:53.414 =============== 00:02:53.414 00:02:53.414 common: 00:02:53.414 00:02:53.414 bus: 00:02:53.414 pci, vdev, 00:02:53.414 mempool: 00:02:53.414 ring, 00:02:53.414 dma: 00:02:53.414 00:02:53.414 net: 00:02:53.414 i40e, 00:02:53.414 raw: 00:02:53.414 00:02:53.414 crypto: 00:02:53.414 00:02:53.414 compress: 00:02:53.414 00:02:53.414 regex: 00:02:53.414 00:02:53.414 vdpa: 00:02:53.414 00:02:53.414 event: 00:02:53.414 00:02:53.414 baseband: 00:02:53.414 00:02:53.414 gpu: 00:02:53.414 00:02:53.414 00:02:53.414 Message: 00:02:53.414 ================= 00:02:53.414 Content Skipped 00:02:53.414 ================= 00:02:53.414 00:02:53.414 apps: 00:02:53.414 00:02:53.414 libs: 00:02:53.414 kni: explicitly disabled via build config (deprecated lib) 00:02:53.414 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:53.414 00:02:53.414 drivers: 00:02:53.414 common/cpt: not in enabled drivers build config 00:02:53.414 common/dpaax: not in enabled drivers build config 00:02:53.414 common/iavf: not in enabled drivers build config 00:02:53.414 common/idpf: not in enabled drivers build config 00:02:53.414 common/mvep: not in enabled drivers build config 00:02:53.414 common/octeontx: not in enabled drivers build config 00:02:53.414 bus/auxiliary: not in enabled drivers build config 00:02:53.414 bus/dpaa: not in enabled drivers build config 00:02:53.414 bus/fslmc: not in enabled drivers build config 00:02:53.414 bus/ifpga: not in enabled drivers build config 00:02:53.414 bus/vmbus: not in enabled drivers build config 00:02:53.414 common/cnxk: not in enabled drivers build config 00:02:53.414 common/mlx5: not in enabled drivers build config 00:02:53.414 common/qat: not in enabled drivers build config 00:02:53.414 common/sfc_efx: not in enabled drivers build config 00:02:53.414 mempool/bucket: not in enabled drivers build config 00:02:53.414 mempool/cnxk: not in enabled drivers build config 00:02:53.414 mempool/dpaa: not in enabled drivers build config 00:02:53.414 mempool/dpaa2: not in enabled drivers build config 00:02:53.415 mempool/octeontx: not in enabled drivers build config 00:02:53.415 mempool/stack: not in enabled drivers build config 00:02:53.415 dma/cnxk: not in enabled drivers build config 00:02:53.415 dma/dpaa: not in enabled drivers build config 00:02:53.415 dma/dpaa2: not in enabled drivers build config 00:02:53.415 dma/hisilicon: not in enabled drivers build config 00:02:53.415 dma/idxd: not in enabled drivers build config 00:02:53.415 dma/ioat: not in enabled drivers build config 00:02:53.415 dma/skeleton: not in enabled drivers build config 00:02:53.415 net/af_packet: not in enabled drivers build config 00:02:53.415 net/af_xdp: not in enabled drivers build config 00:02:53.415 net/ark: not in enabled drivers build config 00:02:53.415 net/atlantic: not in enabled drivers build config 00:02:53.415 net/avp: not in enabled drivers build config 00:02:53.415 net/axgbe: not in enabled drivers build config 00:02:53.415 net/bnx2x: not in enabled drivers build config 00:02:53.415 net/bnxt: not in enabled drivers build config 00:02:53.415 net/bonding: not in enabled drivers build config 00:02:53.415 net/cnxk: not in enabled drivers build config 00:02:53.415 net/cxgbe: not in 
enabled drivers build config 00:02:53.415 net/dpaa: not in enabled drivers build config 00:02:53.415 net/dpaa2: not in enabled drivers build config 00:02:53.415 net/e1000: not in enabled drivers build config 00:02:53.415 net/ena: not in enabled drivers build config 00:02:53.415 net/enetc: not in enabled drivers build config 00:02:53.415 net/enetfec: not in enabled drivers build config 00:02:53.415 net/enic: not in enabled drivers build config 00:02:53.415 net/failsafe: not in enabled drivers build config 00:02:53.415 net/fm10k: not in enabled drivers build config 00:02:53.415 net/gve: not in enabled drivers build config 00:02:53.415 net/hinic: not in enabled drivers build config 00:02:53.415 net/hns3: not in enabled drivers build config 00:02:53.415 net/iavf: not in enabled drivers build config 00:02:53.415 net/ice: not in enabled drivers build config 00:02:53.415 net/idpf: not in enabled drivers build config 00:02:53.415 net/igc: not in enabled drivers build config 00:02:53.415 net/ionic: not in enabled drivers build config 00:02:53.415 net/ipn3ke: not in enabled drivers build config 00:02:53.415 net/ixgbe: not in enabled drivers build config 00:02:53.415 net/kni: not in enabled drivers build config 00:02:53.415 net/liquidio: not in enabled drivers build config 00:02:53.415 net/mana: not in enabled drivers build config 00:02:53.415 net/memif: not in enabled drivers build config 00:02:53.415 net/mlx4: not in enabled drivers build config 00:02:53.415 net/mlx5: not in enabled drivers build config 00:02:53.415 net/mvneta: not in enabled drivers build config 00:02:53.415 net/mvpp2: not in enabled drivers build config 00:02:53.415 net/netvsc: not in enabled drivers build config 00:02:53.415 net/nfb: not in enabled drivers build config 00:02:53.415 net/nfp: not in enabled drivers build config 00:02:53.415 net/ngbe: not in enabled drivers build config 00:02:53.415 net/null: not in enabled drivers build config 00:02:53.415 net/octeontx: not in enabled drivers build config 00:02:53.415 net/octeon_ep: not in enabled drivers build config 00:02:53.415 net/pcap: not in enabled drivers build config 00:02:53.415 net/pfe: not in enabled drivers build config 00:02:53.415 net/qede: not in enabled drivers build config 00:02:53.415 net/ring: not in enabled drivers build config 00:02:53.415 net/sfc: not in enabled drivers build config 00:02:53.415 net/softnic: not in enabled drivers build config 00:02:53.415 net/tap: not in enabled drivers build config 00:02:53.415 net/thunderx: not in enabled drivers build config 00:02:53.415 net/txgbe: not in enabled drivers build config 00:02:53.415 net/vdev_netvsc: not in enabled drivers build config 00:02:53.415 net/vhost: not in enabled drivers build config 00:02:53.415 net/virtio: not in enabled drivers build config 00:02:53.415 net/vmxnet3: not in enabled drivers build config 00:02:53.415 raw/cnxk_bphy: not in enabled drivers build config 00:02:53.415 raw/cnxk_gpio: not in enabled drivers build config 00:02:53.415 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:53.415 raw/ifpga: not in enabled drivers build config 00:02:53.415 raw/ntb: not in enabled drivers build config 00:02:53.415 raw/skeleton: not in enabled drivers build config 00:02:53.415 crypto/armv8: not in enabled drivers build config 00:02:53.415 crypto/bcmfs: not in enabled drivers build config 00:02:53.415 crypto/caam_jr: not in enabled drivers build config 00:02:53.415 crypto/ccp: not in enabled drivers build config 00:02:53.415 crypto/cnxk: not in enabled drivers build config 00:02:53.415 
crypto/dpaa_sec: not in enabled drivers build config 00:02:53.415 crypto/dpaa2_sec: not in enabled drivers build config 00:02:53.415 crypto/ipsec_mb: not in enabled drivers build config 00:02:53.415 crypto/mlx5: not in enabled drivers build config 00:02:53.415 crypto/mvsam: not in enabled drivers build config 00:02:53.415 crypto/nitrox: not in enabled drivers build config 00:02:53.415 crypto/null: not in enabled drivers build config 00:02:53.415 crypto/octeontx: not in enabled drivers build config 00:02:53.415 crypto/openssl: not in enabled drivers build config 00:02:53.415 crypto/scheduler: not in enabled drivers build config 00:02:53.415 crypto/uadk: not in enabled drivers build config 00:02:53.415 crypto/virtio: not in enabled drivers build config 00:02:53.415 compress/isal: not in enabled drivers build config 00:02:53.415 compress/mlx5: not in enabled drivers build config 00:02:53.415 compress/octeontx: not in enabled drivers build config 00:02:53.415 compress/zlib: not in enabled drivers build config 00:02:53.415 regex/mlx5: not in enabled drivers build config 00:02:53.415 regex/cn9k: not in enabled drivers build config 00:02:53.415 vdpa/ifc: not in enabled drivers build config 00:02:53.415 vdpa/mlx5: not in enabled drivers build config 00:02:53.415 vdpa/sfc: not in enabled drivers build config 00:02:53.415 event/cnxk: not in enabled drivers build config 00:02:53.415 event/dlb2: not in enabled drivers build config 00:02:53.415 event/dpaa: not in enabled drivers build config 00:02:53.415 event/dpaa2: not in enabled drivers build config 00:02:53.415 event/dsw: not in enabled drivers build config 00:02:53.415 event/opdl: not in enabled drivers build config 00:02:53.415 event/skeleton: not in enabled drivers build config 00:02:53.415 event/sw: not in enabled drivers build config 00:02:53.415 event/octeontx: not in enabled drivers build config 00:02:53.415 baseband/acc: not in enabled drivers build config 00:02:53.415 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:53.415 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:53.415 baseband/la12xx: not in enabled drivers build config 00:02:53.415 baseband/null: not in enabled drivers build config 00:02:53.415 baseband/turbo_sw: not in enabled drivers build config 00:02:53.415 gpu/cuda: not in enabled drivers build config 00:02:53.415 00:02:53.415 00:02:53.415 Build targets in project: 314 00:02:53.415 00:02:53.415 DPDK 22.11.4 00:02:53.415 00:02:53.415 User defined options 00:02:53.415 libdir : lib 00:02:53.415 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:53.415 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:53.415 c_link_args : 00:02:53.415 enable_docs : false 00:02:53.415 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:53.415 enable_kmods : false 00:02:53.415 machine : native 00:02:53.415 tests : false 00:02:53.415 00:02:53.415 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:53.415 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
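The "User defined options" summary above, together with the WARNING, records how meson was driven for this build: it was invoked as `meson [options]` rather than the explicit `meson setup [options]` form that current meson releases expect. A minimal sketch of an equivalent explicit invocation, reconstructed only from the options logged above — the original command line itself does not appear in this log, the build-tmp directory name is taken from the ninja step below, and the sketch assumes it is run from the DPDK source root:

  meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/dpdk/build \
    --libdir=lib \
    -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Denable_docs=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
    -Denable_kmods=false \
    -Dmachine=native \
    -Dtests=false

Spelling out the `setup` subcommand should yield the same rte_build_config.h while avoiding the deprecation warning.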
00:02:53.415 22:34:08 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:53.415 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:53.674 [1/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:53.674 [2/743] Generating lib/rte_telemetry_def with a custom command 00:02:53.674 [3/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:53.674 [4/743] Generating lib/rte_kvargs_def with a custom command 00:02:53.674 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:53.674 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:53.674 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:53.674 [8/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:53.674 [9/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:53.674 [10/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:53.674 [11/743] Linking static target lib/librte_kvargs.a 00:02:53.674 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:53.674 [13/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:53.674 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:53.932 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:53.932 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:53.932 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:53.932 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:53.932 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:53.932 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.932 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:53.932 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:53.932 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:53.932 [24/743] Linking target lib/librte_kvargs.so.23.0 00:02:53.932 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:54.191 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:54.191 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:54.191 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:54.191 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:54.191 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:54.191 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:54.191 [32/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:54.191 [33/743] Linking static target lib/librte_telemetry.a 00:02:54.191 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:54.191 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:54.191 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:54.449 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:54.449 [38/743] Generating symbol file 
lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:54.449 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:54.449 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:54.449 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:54.449 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:54.449 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:54.708 [44/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.708 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:54.708 [46/743] Linking target lib/librte_telemetry.so.23.0 00:02:54.708 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:54.708 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:54.708 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:54.708 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:54.708 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:54.708 [52/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:54.708 [53/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:54.708 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:54.708 [55/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:54.708 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:54.966 [57/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:54.966 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:54.966 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:54.966 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:54.966 [61/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:54.966 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:54.966 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:54.966 [64/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:54.966 [65/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:54.966 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:54.966 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:54.966 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:54.966 [69/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:55.225 [70/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:55.225 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:55.225 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:55.225 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:55.225 [74/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:55.225 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:55.225 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:55.225 [77/743] Generating lib/rte_eal_def with a custom command 00:02:55.225 [78/743] Generating lib/rte_eal_mingw with a custom 
command 00:02:55.225 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:55.225 [80/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:55.225 [81/743] Generating lib/rte_ring_def with a custom command 00:02:55.225 [82/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:55.225 [83/743] Generating lib/rte_ring_mingw with a custom command 00:02:55.225 [84/743] Generating lib/rte_rcu_def with a custom command 00:02:55.225 [85/743] Generating lib/rte_rcu_mingw with a custom command 00:02:55.225 [86/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:55.484 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:55.484 [88/743] Linking static target lib/librte_ring.a 00:02:55.484 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:55.484 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:55.484 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:55.484 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:55.484 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:55.484 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.742 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:55.742 [96/743] Linking static target lib/librte_eal.a 00:02:56.002 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:56.002 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:56.002 [99/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:56.002 [100/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:56.002 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:56.002 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:56.002 [103/743] Linking static target lib/librte_rcu.a 00:02:56.002 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:56.261 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:56.261 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:56.261 [107/743] Linking static target lib/librte_mempool.a 00:02:56.261 [108/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:56.520 [109/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.520 [110/743] Generating lib/rte_net_def with a custom command 00:02:56.520 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:56.520 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:56.520 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:56.520 [114/743] Generating lib/rte_meter_def with a custom command 00:02:56.520 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:56.520 [116/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:56.520 [117/743] Linking static target lib/librte_meter.a 00:02:56.799 [118/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:56.799 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:56.799 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:56.799 [121/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.799 [122/743] Compiling C 
object lib/librte_net.a.p/net_rte_net.c.o 00:02:57.085 [123/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:57.085 [124/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:57.085 [125/743] Linking static target lib/librte_net.a 00:02:57.085 [126/743] Linking static target lib/librte_mbuf.a 00:02:57.085 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.350 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.350 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:57.350 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:57.350 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:57.350 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:57.608 [133/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:57.608 [134/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.867 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:58.126 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:58.126 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:58.126 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:58.126 [139/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:58.126 [140/743] Generating lib/rte_pci_def with a custom command 00:02:58.126 [141/743] Generating lib/rte_pci_mingw with a custom command 00:02:58.126 [142/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:58.385 [143/743] Linking static target lib/librte_pci.a 00:02:58.385 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:58.385 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:58.385 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:58.385 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:58.385 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:58.385 [149/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.385 [150/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:58.385 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:58.385 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:58.644 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:58.644 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:58.644 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:58.644 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:58.644 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:58.644 [158/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:58.644 [159/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:58.644 [160/743] Generating lib/rte_metrics_def with a custom command 00:02:58.644 [161/743] Generating lib/rte_metrics_mingw with a custom command 00:02:58.644 [162/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:58.644 [163/743] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:58.644 [164/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:58.903 [165/743] Generating lib/rte_hash_def with a custom command 00:02:58.903 [166/743] Generating lib/rte_hash_mingw with a custom command 00:02:58.903 [167/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:58.903 [168/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:58.903 [169/743] Generating lib/rte_timer_def with a custom command 00:02:58.903 [170/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:58.903 [171/743] Generating lib/rte_timer_mingw with a custom command 00:02:58.903 [172/743] Linking static target lib/librte_cmdline.a 00:02:58.903 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:59.162 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:59.162 [175/743] Linking static target lib/librte_metrics.a 00:02:59.420 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:59.420 [177/743] Linking static target lib/librte_timer.a 00:02:59.678 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.678 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.937 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:59.937 [181/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:59.937 [182/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.937 [183/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:59.937 [184/743] Linking static target lib/librte_ethdev.a 00:03:00.502 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:00.502 [186/743] Generating lib/rte_acl_def with a custom command 00:03:00.502 [187/743] Generating lib/rte_acl_mingw with a custom command 00:03:00.502 [188/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:00.502 [189/743] Generating lib/rte_bbdev_def with a custom command 00:03:00.502 [190/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:00.502 [191/743] Generating lib/rte_bbdev_mingw with a custom command 00:03:00.502 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:03:00.502 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:03:01.066 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:01.066 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:01.066 [196/743] Linking static target lib/librte_bitratestats.a 00:03:01.066 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:01.325 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.325 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:01.325 [200/743] Linking static target lib/librte_bbdev.a 00:03:01.584 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:01.584 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:01.584 [203/743] Linking static target lib/librte_hash.a 00:03:01.843 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:02.102 [205/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:03:02.102 [206/743] Linking static target 
lib/acl/libavx512_tmp.a 00:03:02.102 [207/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:02.102 [208/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.102 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:02.360 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.360 [211/743] Generating lib/rte_bpf_def with a custom command 00:03:02.360 [212/743] Generating lib/rte_bpf_mingw with a custom command 00:03:02.360 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:02.360 [214/743] Generating lib/rte_cfgfile_def with a custom command 00:03:02.618 [215/743] Generating lib/rte_cfgfile_mingw with a custom command 00:03:02.618 [216/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:02.618 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:03:02.618 [218/743] Linking static target lib/librte_acl.a 00:03:02.618 [219/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:02.618 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:02.618 [221/743] Linking static target lib/librte_cfgfile.a 00:03:02.877 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:02.877 [223/743] Generating lib/rte_compressdev_def with a custom command 00:03:02.877 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:03:02.877 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.877 [226/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.134 [227/743] Linking target lib/librte_eal.so.23.0 00:03:03.134 [228/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.134 [229/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:03.134 [230/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:03:03.134 [231/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:03.134 [232/743] Generating lib/rte_cryptodev_def with a custom command 00:03:03.134 [233/743] Linking target lib/librte_ring.so.23.0 00:03:03.134 [234/743] Linking target lib/librte_pci.so.23.0 00:03:03.134 [235/743] Linking target lib/librte_meter.so.23.0 00:03:03.392 [236/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:03.392 [237/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:03:03.392 [238/743] Linking target lib/librte_timer.so.23.0 00:03:03.392 [239/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:03:03.392 [240/743] Linking target lib/librte_rcu.so.23.0 00:03:03.392 [241/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:03:03.392 [242/743] Linking target lib/librte_mempool.so.23.0 00:03:03.392 [243/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:03.392 [244/743] Linking static target lib/librte_bpf.a 00:03:03.392 [245/743] Linking target lib/librte_acl.so.23.0 00:03:03.392 [246/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:03.392 [247/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:03:03.392 [248/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:03.392 [249/743] Generating symbol file 
lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:03:03.392 [250/743] Linking static target lib/librte_compressdev.a 00:03:03.649 [251/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:03:03.649 [252/743] Linking target lib/librte_cfgfile.so.23.0 00:03:03.649 [253/743] Generating lib/rte_cryptodev_mingw with a custom command 00:03:03.649 [254/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:03:03.649 [255/743] Linking target lib/librte_mbuf.so.23.0 00:03:03.649 [256/743] Generating lib/rte_distributor_def with a custom command 00:03:03.649 [257/743] Generating lib/rte_distributor_mingw with a custom command 00:03:03.649 [258/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:03:03.649 [259/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:03.649 [260/743] Linking target lib/librte_net.so.23.0 00:03:03.649 [261/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.649 [262/743] Generating lib/rte_efd_def with a custom command 00:03:03.649 [263/743] Linking target lib/librte_bbdev.so.23.0 00:03:03.906 [264/743] Generating lib/rte_efd_mingw with a custom command 00:03:03.906 [265/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:03:03.906 [266/743] Linking target lib/librte_cmdline.so.23.0 00:03:03.906 [267/743] Linking target lib/librte_hash.so.23.0 00:03:04.163 [268/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:03:04.164 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:04.164 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:04.421 [271/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.421 [272/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:04.421 [273/743] Linking target lib/librte_compressdev.so.23.0 00:03:04.421 [274/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.421 [275/743] Linking target lib/librte_ethdev.so.23.0 00:03:04.678 [276/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:04.678 [277/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:04.678 [278/743] Linking static target lib/librte_distributor.a 00:03:04.678 [279/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:03:04.678 [280/743] Linking target lib/librte_metrics.so.23.0 00:03:04.935 [281/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.935 [282/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:04.935 [283/743] Linking target lib/librte_bpf.so.23.0 00:03:04.935 [284/743] Linking target lib/librte_bitratestats.so.23.0 00:03:04.935 [285/743] Linking target lib/librte_distributor.so.23.0 00:03:04.935 [286/743] Generating lib/rte_eventdev_def with a custom command 00:03:04.935 [287/743] Generating lib/rte_eventdev_mingw with a custom command 00:03:04.935 [288/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:04.935 [289/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:04.935 [290/743] Generating lib/rte_gpudev_def with a 
custom command 00:03:04.935 [291/743] Generating lib/rte_gpudev_mingw with a custom command 00:03:05.192 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:05.192 [293/743] Linking static target lib/librte_efd.a 00:03:05.448 [294/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.448 [295/743] Linking target lib/librte_efd.so.23.0 00:03:05.448 [296/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:05.448 [297/743] Linking static target lib/librte_cryptodev.a 00:03:05.705 [298/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:05.705 [299/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:05.962 [300/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:05.963 [301/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:05.963 [302/743] Generating lib/rte_gro_def with a custom command 00:03:05.963 [303/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:05.963 [304/743] Generating lib/rte_gro_mingw with a custom command 00:03:05.963 [305/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:05.963 [306/743] Linking static target lib/librte_gpudev.a 00:03:06.219 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:06.219 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:06.477 [309/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:06.477 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:06.477 [311/743] Generating lib/rte_gso_def with a custom command 00:03:06.477 [312/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:06.477 [313/743] Generating lib/rte_gso_mingw with a custom command 00:03:06.477 [314/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:06.735 [315/743] Linking static target lib/librte_gro.a 00:03:06.735 [316/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.735 [317/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:06.735 [318/743] Linking target lib/librte_gpudev.so.23.0 00:03:06.735 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:06.735 [320/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.993 [321/743] Linking target lib/librte_gro.so.23.0 00:03:06.993 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:06.993 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:03:06.993 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:03:06.993 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:06.993 [326/743] Linking static target lib/librte_eventdev.a 00:03:07.251 [327/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:07.251 [328/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:07.251 [329/743] Linking static target lib/librte_jobstats.a 00:03:07.251 [330/743] Linking static target lib/librte_gso.a 00:03:07.251 [331/743] Generating lib/rte_jobstats_def with a custom command 00:03:07.251 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:03:07.251 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.510 [334/743] Linking target 
lib/librte_gso.so.23.0 00:03:07.510 [335/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:07.510 [336/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:07.510 [337/743] Generating lib/rte_latencystats_def with a custom command 00:03:07.510 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:07.510 [339/743] Generating lib/rte_latencystats_mingw with a custom command 00:03:07.510 [340/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:07.510 [341/743] Generating lib/rte_lpm_def with a custom command 00:03:07.510 [342/743] Generating lib/rte_lpm_mingw with a custom command 00:03:07.510 [343/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.510 [344/743] Linking target lib/librte_jobstats.so.23.0 00:03:07.510 [345/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.769 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:07.769 [347/743] Linking target lib/librte_cryptodev.so.23.0 00:03:07.769 [348/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:07.769 [349/743] Linking static target lib/librte_ip_frag.a 00:03:07.769 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:08.027 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.027 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:03:08.286 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:08.286 [354/743] Linking static target lib/librte_latencystats.a 00:03:08.286 [355/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:08.286 [356/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:08.286 [357/743] Generating lib/rte_member_def with a custom command 00:03:08.286 [358/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:08.286 [359/743] Generating lib/rte_member_mingw with a custom command 00:03:08.286 [360/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:08.286 [361/743] Generating lib/rte_pcapng_def with a custom command 00:03:08.286 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:03:08.286 [363/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:08.286 [364/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:08.286 [365/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.545 [366/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:08.545 [367/743] Linking target lib/librte_latencystats.so.23.0 00:03:08.545 [368/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:08.545 [369/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:08.545 [370/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:08.804 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:03:08.804 [372/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:08.804 [373/743] Generating lib/rte_power_def with a custom command 00:03:08.804 [374/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:08.804 [375/743] 
Generating lib/rte_power_mingw with a custom command 00:03:08.804 [376/743] Linking static target lib/librte_lpm.a 00:03:09.063 [377/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.063 [378/743] Linking target lib/librte_eventdev.so.23.0 00:03:09.063 [379/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:09.063 [380/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:09.063 [381/743] Generating lib/rte_rawdev_def with a custom command 00:03:09.063 [382/743] Generating lib/rte_rawdev_mingw with a custom command 00:03:09.063 [383/743] Generating lib/rte_regexdev_def with a custom command 00:03:09.321 [384/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:09.321 [385/743] Generating lib/rte_regexdev_mingw with a custom command 00:03:09.321 [386/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.322 [387/743] Generating lib/rte_dmadev_def with a custom command 00:03:09.322 [388/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:09.322 [389/743] Linking static target lib/librte_pcapng.a 00:03:09.322 [390/743] Generating lib/rte_dmadev_mingw with a custom command 00:03:09.322 [391/743] Linking target lib/librte_lpm.so.23.0 00:03:09.322 [392/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:09.322 [393/743] Linking static target lib/librte_rawdev.a 00:03:09.322 [394/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:09.322 [395/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:09.322 [396/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:09.322 [397/743] Generating lib/rte_rib_mingw with a custom command 00:03:09.322 [398/743] Generating lib/rte_rib_def with a custom command 00:03:09.581 [399/743] Generating lib/rte_reorder_def with a custom command 00:03:09.581 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:03:09.581 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.581 [402/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:09.581 [403/743] Linking static target lib/librte_power.a 00:03:09.581 [404/743] Linking target lib/librte_pcapng.so.23.0 00:03:09.581 [405/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:09.581 [406/743] Linking static target lib/librte_dmadev.a 00:03:09.847 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:09.847 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.847 [409/743] Linking target lib/librte_rawdev.so.23.0 00:03:09.847 [410/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:09.847 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:09.847 [412/743] Linking static target lib/librte_regexdev.a 00:03:09.847 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:09.847 [414/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:09.847 [415/743] Generating lib/rte_sched_def with a custom command 00:03:09.847 [416/743] Generating lib/rte_sched_mingw with a custom command 00:03:10.105 [417/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:10.105 [418/743] Linking 
static target lib/librte_member.a 00:03:10.105 [419/743] Generating lib/rte_security_def with a custom command 00:03:10.105 [420/743] Generating lib/rte_security_mingw with a custom command 00:03:10.105 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:10.105 [422/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.105 [423/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:10.105 [424/743] Linking target lib/librte_dmadev.so.23.0 00:03:10.364 [425/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:10.364 [426/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:10.364 [427/743] Linking static target lib/librte_stack.a 00:03:10.364 [428/743] Generating lib/rte_stack_def with a custom command 00:03:10.364 [429/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:10.364 [430/743] Linking static target lib/librte_reorder.a 00:03:10.364 [431/743] Generating lib/rte_stack_mingw with a custom command 00:03:10.364 [432/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.364 [433/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:10.364 [434/743] Linking target lib/librte_member.so.23.0 00:03:10.364 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:10.364 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.622 [437/743] Linking target lib/librte_stack.so.23.0 00:03:10.622 [438/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:10.622 [439/743] Linking static target lib/librte_rib.a 00:03:10.622 [440/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.622 [441/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.622 [442/743] Linking target lib/librte_power.so.23.0 00:03:10.622 [443/743] Linking target lib/librte_reorder.so.23.0 00:03:10.622 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.622 [445/743] Linking target lib/librte_regexdev.so.23.0 00:03:10.879 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:10.879 [447/743] Linking static target lib/librte_security.a 00:03:10.879 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.879 [449/743] Linking target lib/librte_rib.so.23.0 00:03:11.136 [450/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:11.136 [451/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:11.136 [452/743] Generating lib/rte_vhost_def with a custom command 00:03:11.136 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:03:11.137 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:11.394 [455/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:11.394 [456/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.394 [457/743] Linking target lib/librte_security.so.23.0 00:03:11.394 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:11.394 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:11.394 [460/743] Linking static target lib/librte_sched.a 00:03:11.959 [461/743] 
Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.960 [462/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:11.960 [463/743] Linking target lib/librte_sched.so.23.0 00:03:11.960 [464/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:11.960 [465/743] Generating lib/rte_ipsec_def with a custom command 00:03:11.960 [466/743] Generating lib/rte_ipsec_mingw with a custom command 00:03:12.218 [467/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:12.218 [468/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:12.218 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:12.218 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:12.476 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:12.735 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:12.735 [473/743] Generating lib/rte_fib_def with a custom command 00:03:12.735 [474/743] Generating lib/rte_fib_mingw with a custom command 00:03:12.735 [475/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:12.735 [476/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:12.735 [477/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:12.735 [478/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:12.735 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:12.994 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:12.994 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:12.994 [482/743] Linking static target lib/librte_ipsec.a 00:03:13.560 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.560 [484/743] Linking target lib/librte_ipsec.so.23.0 00:03:13.560 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:13.560 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:13.560 [487/743] Linking static target lib/librte_fib.a 00:03:13.819 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:13.819 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:13.819 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:13.819 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:14.077 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.077 [493/743] Linking target lib/librte_fib.so.23.0 00:03:14.077 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:14.644 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:14.644 [496/743] Generating lib/rte_port_def with a custom command 00:03:14.644 [497/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:14.903 [498/743] Generating lib/rte_port_mingw with a custom command 00:03:14.903 [499/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:14.903 [500/743] Generating lib/rte_pdump_def with a custom command 00:03:14.903 [501/743] Generating lib/rte_pdump_mingw with a custom command 00:03:14.903 [502/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:14.903 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:14.903 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:03:15.162 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:03:15.162 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:03:15.162 [507/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:03:15.162 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:03:15.421 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:03:15.421 [510/743] Linking static target lib/librte_port.a
00:03:15.680 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:03:15.680 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:03:15.943 [513/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:03:15.943 [514/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:03:15.943 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:03:16.233 [516/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:03:16.234 [517/743] Linking static target lib/librte_pdump.a
00:03:16.234 [518/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.234 [519/743] Linking target lib/librte_port.so.23.0
00:03:16.516 [520/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:03:16.516 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.516 [522/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:03:16.516 [523/743] Linking target lib/librte_pdump.so.23.0
00:03:16.516 [524/743] Generating lib/rte_table_def with a custom command
00:03:16.516 [525/743] Generating lib/rte_table_mingw with a custom command
00:03:16.775 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:03:16.775 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:03:17.033 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:03:17.033 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:03:17.291 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:03:17.291 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:03:17.291 [532/743] Generating lib/rte_pipeline_def with a custom command
00:03:17.291 [533/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:03:17.291 [534/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:17.291 [535/743] Generating lib/rte_pipeline_mingw with a custom command
00:03:17.291 [536/743] Linking static target lib/librte_table.a
00:03:17.549 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:03:17.807 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:03:17.807 [539/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.066 [540/743] Linking target lib/librte_table.so.23.0
00:03:18.066 [541/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:03:18.066 [542/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:03:18.066 [543/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:03:18.066 [544/743] Generating lib/rte_graph_def with a custom command
00:03:18.066 [545/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:03:18.066 [546/743] Generating lib/rte_graph_mingw with a custom command
00:03:18.324 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:03:18.582 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:03:18.840 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:03:18.840 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:03:18.840 [551/743] Linking static target lib/librte_graph.a
00:03:18.840 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:03:19.098 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:03:19.098 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o
00:03:19.098 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:03:19.356 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o
00:03:19.356 [557/743] Generating lib/rte_node_def with a custom command
00:03:19.615 [558/743] Generating lib/rte_node_mingw with a custom command
00:03:19.615 [559/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.615 [560/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:03:19.615 [561/743] Linking target lib/librte_graph.so.23.0
00:03:19.615 [562/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:03:19.615 [563/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:19.873 [564/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:03:19.873 [565/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:19.873 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:19.873 [567/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:03:19.873 [568/743] Generating drivers/rte_bus_pci_def with a custom command
00:03:19.873 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command
00:03:19.873 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:19.873 [571/743] Generating drivers/rte_bus_vdev_def with a custom command
00:03:19.873 [572/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:20.131 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command
00:03:20.131 [574/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:03:20.131 [575/743] Generating drivers/rte_mempool_ring_def with a custom command
00:03:20.131 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command
00:03:20.131 [577/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:20.131 [578/743] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:20.131 [579/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:03:20.131 [580/743] Linking static target lib/librte_node.a
00:03:20.131 [581/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:20.389 [582/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:20.389 [583/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:20.389 [584/743] Linking static target drivers/librte_bus_vdev.a
00:03:20.389 [585/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.389 [586/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:20.389 [587/743] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:20.390 [588/743] Linking target lib/librte_node.so.23.0
00:03:20.390 [589/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:20.647 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.647 [591/743] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:20.647 [592/743] Linking target drivers/librte_bus_vdev.so.23.0
00:03:20.647 [593/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:20.647 [594/743] Linking static target drivers/librte_bus_pci.a
00:03:20.647 [595/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:20.904 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:03:21.161 [597/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.161 [598/743] Linking target drivers/librte_bus_pci.so.23.0
00:03:21.161 [599/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:03:21.161 [600/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:03:21.161 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:03:21.161 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:03:21.419 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:21.419 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:21.419 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:21.419 [606/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:03:21.419 [607/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:21.419 [608/743] Linking static target drivers/librte_mempool_ring.a
00:03:21.419 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:21.677 [610/743] Linking target drivers/librte_mempool_ring.so.23.0
00:03:22.245 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:03:22.245 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:03:22.503 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:03:22.503 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a
00:03:22.762 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:03:23.021 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:03:23.021 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:03:23.279 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:03:23.536 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:03:23.793 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:03:23.793 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:03:23.793 [622/743] Generating drivers/rte_net_i40e_def with a custom command
00:03:23.793 [623/743] Generating drivers/rte_net_i40e_mingw with a custom command
00:03:23.793 [624/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:03:24.050 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:03:24.982 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:03:25.238 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:03:25.238 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:03:25.238 [629/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:03:25.238 [630/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:03:25.495 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:03:25.495 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:03:25.495 [633/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:03:25.495 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:03:25.753 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o
00:03:25.753 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:03:26.319 [637/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:03:26.319 [638/743] Linking static target drivers/libtmp_rte_net_i40e.a
00:03:26.319 [639/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:03:26.577 [640/743] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:03:26.577 [641/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:26.577 [642/743] Linking static target drivers/librte_net_i40e.a
00:03:26.577 [643/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:03:26.577 [644/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:26.577 [645/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:03:26.833 [646/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:03:26.833 [647/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:03:26.833 [648/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:26.833 [649/743] Linking static target lib/librte_vhost.a
00:03:27.092 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:03:27.092 [651/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:03:27.350 [652/743] Linking target drivers/librte_net_i40e.so.23.0
00:03:27.350 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:03:27.350 [654/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:03:27.609 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:03:27.609 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:03:27.868 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:03:28.127 [658/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.127 [659/743] Linking target lib/librte_vhost.so.23.0
00:03:28.127 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:03:28.127 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:03:28.386 [662/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:03:28.386 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:03:28.386 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:03:28.386 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:03:28.386 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:03:28.645 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:03:28.645 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:03:28.904 [669/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:03:28.904 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:03:29.162 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:03:29.420 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:03:29.421 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:03:29.679 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:03:29.937 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:03:30.196 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:03:30.196 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:03:30.455 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:03:30.455 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:03:30.455 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:03:30.455 [681/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:03:30.712 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:03:30.970 [683/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:03:30.970 [684/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:03:30.970 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:03:31.228 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:03:31.228 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:03:31.228 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:03:31.485 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:03:31.485 [690/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:03:31.743 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:03:31.743 [692/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:03:31.743 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:03:31.743 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:03:32.308 [695/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:03:32.308 [696/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:03:32.309 [697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:03:32.566 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:03:32.566 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:03:33.132 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:03:33.132 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:03:33.132 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:03:33.391 [703/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:03:33.391 [704/743] Linking static target lib/librte_pipeline.a
00:03:33.391 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:03:33.650 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:03:33.650 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:03:33.909 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:03:33.909 [709/743] Linking target app/dpdk-dumpcap
00:03:33.909 [710/743] Linking target app/dpdk-pdump
00:03:33.909 [711/743] Linking target app/dpdk-proc-info
00:03:34.169 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:03:34.169 [713/743] Linking target app/dpdk-test-acl
00:03:34.169 [714/743] Linking target app/dpdk-test-bbdev
00:03:34.428 [715/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:03:34.428 [716/743] Linking target app/dpdk-test-cmdline
00:03:34.428 [717/743] Linking target app/dpdk-test-compress-perf
00:03:34.428 [718/743] Linking target app/dpdk-test-crypto-perf
00:03:34.687 [719/743] Linking target app/dpdk-test-eventdev
00:03:34.687 [720/743] Linking target app/dpdk-test-fib
00:03:34.946 [721/743] Linking target app/dpdk-test-gpudev
00:03:34.946 [722/743] Linking target app/dpdk-test-flow-perf
00:03:34.946 [723/743] Linking target app/dpdk-test-pipeline
00:03:34.946 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:03:35.205 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:03:35.464 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:03:35.464 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:03:35.464 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:03:35.722 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:03:35.722 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:03:35.979 [731/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:35.979 [732/743] Linking target lib/librte_pipeline.so.23.0
00:03:35.979 [733/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:03:36.237 [734/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:03:36.237 [735/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:03:36.237 [736/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:03:36.533 [737/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:03:36.792 [738/743] Linking target app/dpdk-test-sad
00:03:36.792 [739/743] Linking target app/dpdk-test-regex
00:03:36.792 [740/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:03:37.051 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:03:37.311 [742/743] Linking target app/dpdk-testpmd
00:03:37.311 [743/743] Linking target app/dpdk-test-security-perf
00:03:37.311 22:34:51 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:03:37.311 22:34:51 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:37.311 22:34:51 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:03:37.311 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:37.570 [0/1] Installing files.
00:03:37.570 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:37.570 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:03:37.571 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.832 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:37.833 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:37.834 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:37.835 Installing
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:37.835 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.836 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:37.836 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:37.836 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.836 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.836 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.836 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.836 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.836 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.836 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.836 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.836 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.836 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.836 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.836 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.836 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.836 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:37.836 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing 
lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_lpm.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.098 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing lib/librte_node.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:38.099 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:38.099 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:38.099 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.099 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:38.099 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.099 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.099 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.099 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.099 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.099 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.099 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.099 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.099 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.099 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.099 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.099 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.099 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.099 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.099 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.099 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.099 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to 
/home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.099 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing 
/home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.100 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.101 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:38.102 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:38.102 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:38.102 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:38.102 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:38.102 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:38.102 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:38.102 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:38.102 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:38.102 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:38.102 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:38.102 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:38.102 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:38.102 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:38.102 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:38.102 Installing symlink pointing to librte_mbuf.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:38.102 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:38.102 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:38.102 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:38.102 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:38.102 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:38.102 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:38.102 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:38.102 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:38.102 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:38.102 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:38.102 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:38.102 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:38.102 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:38.102 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:38.102 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:38.102 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:38.102 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:38.102 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:38.102 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:38.102 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:38.102 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:38.102 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:38.102 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:38.102 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:38.102 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:38.102 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:38.102 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:38.102 Installing symlink pointing to librte_compressdev.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:38.102 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:38.102 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:38.102 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:38.102 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:38.102 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:38.102 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:38.102 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:38.102 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:38.102 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:38.102 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:38.102 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:38.102 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:38.102 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:38.102 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:38.102 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:38.102 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:38.102 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:38.102 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:38.102 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:38.102 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:38.102 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:38.102 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:38.102 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:38.102 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:38.102 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:38.102 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:38.102 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:38.102 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:38.102 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:38.102 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:38.102 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:38.102 Installing symlink pointing to librte_latencystats.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:38.102 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:38.102 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:38.102 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:38.102 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:38.102 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:38.102 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:38.102 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:38.102 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:38.102 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:38.102 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:38.103 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:38.103 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:38.103 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:38.103 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:38.103 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:38.103 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:38.103 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:38.103 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:38.103 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:38.103 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:38.103 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:38.103 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:38.103 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:38.103 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:38.103 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:38.103 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:38.103 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:38.103 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 
00:03:38.103 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:38.103 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:38.103 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:38.103 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:38.103 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:38.103 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:38.103 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:38.103 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:38.103 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:38.103 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:38.103 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:38.103 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:38.103 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:38.103 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:38.103 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:38.103 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:38.103 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:38.103 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:38.103 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:38.103 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:38.103 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:38.103 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:38.103 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:38.363 22:34:52 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:38.363 22:34:52 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:38.363 00:03:38.363 real 0m51.623s 00:03:38.363 user 6m9.959s 00:03:38.363 sys 0m54.810s 00:03:38.363 22:34:52 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:38.363 ************************************ 00:03:38.363 END TEST build_native_dpdk 00:03:38.363 
************************************ 00:03:38.363 22:34:52 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:38.363 22:34:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:38.363 22:34:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:38.363 22:34:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:38.363 22:34:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:38.363 22:34:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:38.363 22:34:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:38.363 22:34:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:38.363 22:34:52 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:38.363 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:38.622 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.622 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:38.622 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:38.881 Using 'verbs' RDMA provider 00:03:52.017 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:06.923 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:06.923 Creating mk/config.mk...done. 00:04:06.923 Creating mk/cc.flags.mk...done. 00:04:06.923 Type 'make' to build. 00:04:06.923 22:35:20 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:06.924 22:35:20 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:06.924 22:35:20 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:06.924 22:35:20 -- common/autotest_common.sh@10 -- $ set +x 00:04:06.924 ************************************ 00:04:06.924 START TEST make 00:04:06.924 ************************************ 00:04:06.924 22:35:20 make -- common/autotest_common.sh@1125 -- $ make -j10 00:04:06.924 make[1]: Nothing to be done for 'all'. 
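For anyone replaying this stage outside the CI harness, the configure and build steps recorded above reduce to the sketch below. Paths and flags are copied from the configure line in this log and are specific to this vagrant image; treat it as a minimal reproduction under those assumptions, not an official recipe, and drop the optional feature flags your environment lacks.

# Configure SPDK against the DPDK tree installed earlier in this log,
# then build with the same parallelism the harness used (-j10).
# The flags below are a subset of the logged configure invocation;
# the fio and DPDK paths belong to this particular vagrant setup.
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror \
    --with-dpdk=/home/vagrant/spdk_repo/dpdk/build \
    --with-shared --with-uring --with-ublk \
    --with-fio=/usr/src/fio
make -j10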
00:05:03.193 CC lib/ut_mock/mock.o 00:05:03.193 CC lib/ut/ut.o 00:05:03.193 CC lib/log/log.o 00:05:03.193 CC lib/log/log_flags.o 00:05:03.193 CC lib/log/log_deprecated.o 00:05:03.193 LIB libspdk_ut.a 00:05:03.193 LIB libspdk_ut_mock.a 00:05:03.193 LIB libspdk_log.a 00:05:03.193 SO libspdk_ut.so.2.0 00:05:03.193 SO libspdk_ut_mock.so.6.0 00:05:03.193 SO libspdk_log.so.7.0 00:05:03.193 SYMLINK libspdk_ut_mock.so 00:05:03.193 SYMLINK libspdk_ut.so 00:05:03.193 SYMLINK libspdk_log.so 00:05:03.193 CC lib/ioat/ioat.o 00:05:03.193 CC lib/dma/dma.o 00:05:03.193 CXX lib/trace_parser/trace.o 00:05:03.193 CC lib/util/base64.o 00:05:03.193 CC lib/util/cpuset.o 00:05:03.193 CC lib/util/bit_array.o 00:05:03.193 CC lib/util/crc16.o 00:05:03.193 CC lib/util/crc32c.o 00:05:03.193 CC lib/util/crc32.o 00:05:03.193 CC lib/vfio_user/host/vfio_user_pci.o 00:05:03.193 CC lib/vfio_user/host/vfio_user.o 00:05:03.193 CC lib/util/crc32_ieee.o 00:05:03.193 CC lib/util/crc64.o 00:05:03.193 CC lib/util/dif.o 00:05:03.193 LIB libspdk_dma.a 00:05:03.193 CC lib/util/fd.o 00:05:03.193 SO libspdk_dma.so.5.0 00:05:03.193 CC lib/util/fd_group.o 00:05:03.193 CC lib/util/file.o 00:05:03.193 SYMLINK libspdk_dma.so 00:05:03.193 CC lib/util/hexlify.o 00:05:03.193 CC lib/util/iov.o 00:05:03.193 LIB libspdk_ioat.a 00:05:03.193 CC lib/util/math.o 00:05:03.193 CC lib/util/net.o 00:05:03.193 LIB libspdk_vfio_user.a 00:05:03.193 SO libspdk_ioat.so.7.0 00:05:03.193 SO libspdk_vfio_user.so.5.0 00:05:03.193 SYMLINK libspdk_ioat.so 00:05:03.193 SYMLINK libspdk_vfio_user.so 00:05:03.193 CC lib/util/pipe.o 00:05:03.193 CC lib/util/strerror_tls.o 00:05:03.193 CC lib/util/string.o 00:05:03.193 CC lib/util/uuid.o 00:05:03.193 CC lib/util/xor.o 00:05:03.193 CC lib/util/zipf.o 00:05:03.193 CC lib/util/md5.o 00:05:03.193 LIB libspdk_util.a 00:05:03.193 SO libspdk_util.so.10.0 00:05:03.193 SYMLINK libspdk_util.so 00:05:03.193 LIB libspdk_trace_parser.a 00:05:03.193 SO libspdk_trace_parser.so.6.0 00:05:03.193 SYMLINK libspdk_trace_parser.so 00:05:03.193 CC lib/json/json_parse.o 00:05:03.193 CC lib/json/json_util.o 00:05:03.193 CC lib/rdma_provider/common.o 00:05:03.193 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:03.193 CC lib/json/json_write.o 00:05:03.193 CC lib/idxd/idxd.o 00:05:03.193 CC lib/env_dpdk/env.o 00:05:03.193 CC lib/rdma_utils/rdma_utils.o 00:05:03.193 CC lib/conf/conf.o 00:05:03.193 CC lib/vmd/vmd.o 00:05:03.193 CC lib/vmd/led.o 00:05:03.193 LIB libspdk_rdma_provider.a 00:05:03.193 SO libspdk_rdma_provider.so.6.0 00:05:03.193 LIB libspdk_conf.a 00:05:03.193 CC lib/env_dpdk/memory.o 00:05:03.193 CC lib/env_dpdk/pci.o 00:05:03.193 SO libspdk_conf.so.6.0 00:05:03.193 LIB libspdk_rdma_utils.a 00:05:03.193 SYMLINK libspdk_rdma_provider.so 00:05:03.193 LIB libspdk_json.a 00:05:03.193 CC lib/idxd/idxd_user.o 00:05:03.193 SYMLINK libspdk_conf.so 00:05:03.193 SO libspdk_rdma_utils.so.1.0 00:05:03.193 SO libspdk_json.so.6.0 00:05:03.193 CC lib/env_dpdk/init.o 00:05:03.193 SYMLINK libspdk_rdma_utils.so 00:05:03.194 CC lib/env_dpdk/threads.o 00:05:03.194 CC lib/env_dpdk/pci_ioat.o 00:05:03.194 SYMLINK libspdk_json.so 00:05:03.194 CC lib/env_dpdk/pci_virtio.o 00:05:03.194 CC lib/env_dpdk/pci_vmd.o 00:05:03.194 CC lib/env_dpdk/pci_idxd.o 00:05:03.194 CC lib/env_dpdk/pci_event.o 00:05:03.194 CC lib/idxd/idxd_kernel.o 00:05:03.194 CC lib/env_dpdk/sigbus_handler.o 00:05:03.194 CC lib/env_dpdk/pci_dpdk.o 00:05:03.194 LIB libspdk_vmd.a 00:05:03.194 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:03.194 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:03.194 SO 
libspdk_vmd.so.6.0 00:05:03.194 LIB libspdk_idxd.a 00:05:03.194 CC lib/jsonrpc/jsonrpc_server.o 00:05:03.194 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:03.194 CC lib/jsonrpc/jsonrpc_client.o 00:05:03.194 SYMLINK libspdk_vmd.so 00:05:03.194 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:03.194 SO libspdk_idxd.so.12.1 00:05:03.194 SYMLINK libspdk_idxd.so 00:05:03.194 LIB libspdk_jsonrpc.a 00:05:03.194 SO libspdk_jsonrpc.so.6.0 00:05:03.194 SYMLINK libspdk_jsonrpc.so 00:05:03.194 LIB libspdk_env_dpdk.a 00:05:03.194 CC lib/rpc/rpc.o 00:05:03.194 SO libspdk_env_dpdk.so.15.0 00:05:03.194 SYMLINK libspdk_env_dpdk.so 00:05:03.194 LIB libspdk_rpc.a 00:05:03.194 SO libspdk_rpc.so.6.0 00:05:03.194 SYMLINK libspdk_rpc.so 00:05:03.194 CC lib/trace/trace.o 00:05:03.194 CC lib/keyring/keyring.o 00:05:03.194 CC lib/trace/trace_flags.o 00:05:03.194 CC lib/trace/trace_rpc.o 00:05:03.194 CC lib/keyring/keyring_rpc.o 00:05:03.194 CC lib/notify/notify_rpc.o 00:05:03.194 CC lib/notify/notify.o 00:05:03.194 LIB libspdk_notify.a 00:05:03.194 SO libspdk_notify.so.6.0 00:05:03.194 LIB libspdk_trace.a 00:05:03.194 LIB libspdk_keyring.a 00:05:03.194 SO libspdk_trace.so.11.0 00:05:03.194 SYMLINK libspdk_notify.so 00:05:03.194 SO libspdk_keyring.so.2.0 00:05:03.194 SYMLINK libspdk_trace.so 00:05:03.194 SYMLINK libspdk_keyring.so 00:05:03.194 CC lib/thread/iobuf.o 00:05:03.194 CC lib/thread/thread.o 00:05:03.194 CC lib/sock/sock.o 00:05:03.194 CC lib/sock/sock_rpc.o 00:05:03.194 LIB libspdk_sock.a 00:05:03.194 SO libspdk_sock.so.10.0 00:05:03.194 SYMLINK libspdk_sock.so 00:05:03.452 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:03.452 CC lib/nvme/nvme_ctrlr.o 00:05:03.452 CC lib/nvme/nvme_fabric.o 00:05:03.452 CC lib/nvme/nvme_ns_cmd.o 00:05:03.452 CC lib/nvme/nvme_ns.o 00:05:03.452 CC lib/nvme/nvme_pcie_common.o 00:05:03.452 CC lib/nvme/nvme_pcie.o 00:05:03.452 CC lib/nvme/nvme_qpair.o 00:05:03.452 CC lib/nvme/nvme.o 00:05:04.020 LIB libspdk_thread.a 00:05:04.020 CC lib/nvme/nvme_quirks.o 00:05:04.278 SO libspdk_thread.so.10.1 00:05:04.278 CC lib/nvme/nvme_transport.o 00:05:04.278 SYMLINK libspdk_thread.so 00:05:04.278 CC lib/nvme/nvme_discovery.o 00:05:04.278 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:04.278 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:04.278 CC lib/nvme/nvme_tcp.o 00:05:04.278 CC lib/nvme/nvme_opal.o 00:05:04.540 CC lib/nvme/nvme_io_msg.o 00:05:04.540 CC lib/nvme/nvme_poll_group.o 00:05:04.801 CC lib/nvme/nvme_zns.o 00:05:04.801 CC lib/nvme/nvme_stubs.o 00:05:04.801 CC lib/nvme/nvme_auth.o 00:05:04.801 CC lib/nvme/nvme_cuse.o 00:05:04.801 CC lib/nvme/nvme_rdma.o 00:05:05.368 CC lib/accel/accel.o 00:05:05.368 CC lib/accel/accel_rpc.o 00:05:05.368 CC lib/blob/blobstore.o 00:05:05.368 CC lib/accel/accel_sw.o 00:05:05.626 CC lib/init/json_config.o 00:05:05.626 CC lib/virtio/virtio.o 00:05:05.626 CC lib/virtio/virtio_vhost_user.o 00:05:05.885 CC lib/virtio/virtio_vfio_user.o 00:05:05.885 CC lib/blob/request.o 00:05:05.885 CC lib/virtio/virtio_pci.o 00:05:05.885 CC lib/init/subsystem.o 00:05:05.885 CC lib/fsdev/fsdev.o 00:05:05.885 CC lib/fsdev/fsdev_io.o 00:05:06.144 CC lib/fsdev/fsdev_rpc.o 00:05:06.144 CC lib/blob/zeroes.o 00:05:06.144 CC lib/init/subsystem_rpc.o 00:05:06.144 CC lib/blob/blob_bs_dev.o 00:05:06.144 LIB libspdk_virtio.a 00:05:06.144 CC lib/init/rpc.o 00:05:06.404 SO libspdk_virtio.so.7.0 00:05:06.404 LIB libspdk_accel.a 00:05:06.404 LIB libspdk_nvme.a 00:05:06.404 SYMLINK libspdk_virtio.so 00:05:06.404 SO libspdk_accel.so.16.0 00:05:06.404 LIB libspdk_init.a 00:05:06.404 SYMLINK libspdk_accel.so 00:05:06.404 SO 
libspdk_init.so.6.0 00:05:06.663 SYMLINK libspdk_init.so 00:05:06.663 SO libspdk_nvme.so.14.0 00:05:06.663 LIB libspdk_fsdev.a 00:05:06.663 CC lib/bdev/bdev.o 00:05:06.663 CC lib/bdev/bdev_rpc.o 00:05:06.663 CC lib/bdev/part.o 00:05:06.663 CC lib/bdev/bdev_zone.o 00:05:06.663 CC lib/bdev/scsi_nvme.o 00:05:06.663 SO libspdk_fsdev.so.1.0 00:05:06.663 CC lib/event/reactor.o 00:05:06.663 CC lib/event/app.o 00:05:06.663 SYMLINK libspdk_fsdev.so 00:05:06.663 CC lib/event/log_rpc.o 00:05:06.663 SYMLINK libspdk_nvme.so 00:05:06.923 CC lib/event/app_rpc.o 00:05:06.923 CC lib/event/scheduler_static.o 00:05:06.923 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:07.183 LIB libspdk_event.a 00:05:07.183 SO libspdk_event.so.14.0 00:05:07.183 SYMLINK libspdk_event.so 00:05:07.752 LIB libspdk_fuse_dispatcher.a 00:05:07.752 SO libspdk_fuse_dispatcher.so.1.0 00:05:07.752 SYMLINK libspdk_fuse_dispatcher.so 00:05:08.320 LIB libspdk_blob.a 00:05:08.320 SO libspdk_blob.so.11.0 00:05:08.320 SYMLINK libspdk_blob.so 00:05:08.578 CC lib/lvol/lvol.o 00:05:08.578 CC lib/blobfs/blobfs.o 00:05:08.578 CC lib/blobfs/tree.o 00:05:09.511 LIB libspdk_bdev.a 00:05:09.511 SO libspdk_bdev.so.16.0 00:05:09.511 SYMLINK libspdk_bdev.so 00:05:09.511 LIB libspdk_blobfs.a 00:05:09.511 SO libspdk_blobfs.so.10.0 00:05:09.511 SYMLINK libspdk_blobfs.so 00:05:09.511 LIB libspdk_lvol.a 00:05:09.770 SO libspdk_lvol.so.10.0 00:05:09.770 CC lib/ftl/ftl_core.o 00:05:09.770 CC lib/ftl/ftl_init.o 00:05:09.770 CC lib/ftl/ftl_layout.o 00:05:09.770 CC lib/ftl/ftl_io.o 00:05:09.770 CC lib/ftl/ftl_debug.o 00:05:09.770 CC lib/scsi/dev.o 00:05:09.770 CC lib/ublk/ublk.o 00:05:09.770 CC lib/nbd/nbd.o 00:05:09.770 CC lib/nvmf/ctrlr.o 00:05:09.770 SYMLINK libspdk_lvol.so 00:05:09.770 CC lib/nbd/nbd_rpc.o 00:05:10.028 CC lib/ublk/ublk_rpc.o 00:05:10.028 CC lib/ftl/ftl_sb.o 00:05:10.028 CC lib/nvmf/ctrlr_discovery.o 00:05:10.028 CC lib/ftl/ftl_l2p.o 00:05:10.028 CC lib/scsi/lun.o 00:05:10.028 CC lib/ftl/ftl_l2p_flat.o 00:05:10.028 LIB libspdk_nbd.a 00:05:10.028 CC lib/scsi/port.o 00:05:10.028 CC lib/scsi/scsi.o 00:05:10.028 SO libspdk_nbd.so.7.0 00:05:10.028 CC lib/nvmf/ctrlr_bdev.o 00:05:10.347 CC lib/nvmf/subsystem.o 00:05:10.347 SYMLINK libspdk_nbd.so 00:05:10.347 CC lib/nvmf/nvmf.o 00:05:10.347 CC lib/ftl/ftl_nv_cache.o 00:05:10.347 CC lib/scsi/scsi_bdev.o 00:05:10.347 CC lib/ftl/ftl_band.o 00:05:10.347 CC lib/scsi/scsi_pr.o 00:05:10.347 LIB libspdk_ublk.a 00:05:10.347 SO libspdk_ublk.so.3.0 00:05:10.634 CC lib/nvmf/nvmf_rpc.o 00:05:10.634 SYMLINK libspdk_ublk.so 00:05:10.634 CC lib/nvmf/transport.o 00:05:10.634 CC lib/nvmf/tcp.o 00:05:10.634 CC lib/nvmf/stubs.o 00:05:10.900 CC lib/scsi/scsi_rpc.o 00:05:10.900 CC lib/nvmf/mdns_server.o 00:05:10.900 CC lib/scsi/task.o 00:05:11.157 CC lib/ftl/ftl_band_ops.o 00:05:11.157 CC lib/ftl/ftl_writer.o 00:05:11.157 CC lib/nvmf/rdma.o 00:05:11.157 LIB libspdk_scsi.a 00:05:11.157 SO libspdk_scsi.so.9.0 00:05:11.157 CC lib/nvmf/auth.o 00:05:11.157 CC lib/ftl/ftl_rq.o 00:05:11.416 CC lib/ftl/ftl_reloc.o 00:05:11.416 SYMLINK libspdk_scsi.so 00:05:11.416 CC lib/ftl/ftl_l2p_cache.o 00:05:11.416 CC lib/ftl/ftl_p2l.o 00:05:11.416 CC lib/ftl/ftl_p2l_log.o 00:05:11.416 CC lib/ftl/mngt/ftl_mngt.o 00:05:11.674 CC lib/iscsi/conn.o 00:05:11.674 CC lib/vhost/vhost.o 00:05:11.674 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:11.675 CC lib/iscsi/init_grp.o 00:05:11.675 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:11.675 CC lib/vhost/vhost_rpc.o 00:05:11.933 CC lib/vhost/vhost_scsi.o 00:05:11.933 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:11.933 
CC lib/iscsi/iscsi.o 00:05:11.933 CC lib/iscsi/param.o 00:05:12.191 CC lib/iscsi/portal_grp.o 00:05:12.191 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:12.191 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:12.191 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:12.450 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:12.450 CC lib/iscsi/tgt_node.o 00:05:12.450 CC lib/vhost/vhost_blk.o 00:05:12.450 CC lib/vhost/rte_vhost_user.o 00:05:12.450 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:12.450 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:12.450 CC lib/iscsi/iscsi_subsystem.o 00:05:12.450 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:12.709 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:12.709 CC lib/iscsi/iscsi_rpc.o 00:05:12.709 CC lib/iscsi/task.o 00:05:12.967 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:12.967 CC lib/ftl/utils/ftl_conf.o 00:05:12.967 CC lib/ftl/utils/ftl_md.o 00:05:12.967 CC lib/ftl/utils/ftl_mempool.o 00:05:12.967 CC lib/ftl/utils/ftl_bitmap.o 00:05:13.226 CC lib/ftl/utils/ftl_property.o 00:05:13.226 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:13.226 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:13.226 LIB libspdk_nvmf.a 00:05:13.226 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:13.226 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:13.485 SO libspdk_nvmf.so.19.0 00:05:13.485 LIB libspdk_iscsi.a 00:05:13.485 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:13.485 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:13.485 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:13.485 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:13.485 SO libspdk_iscsi.so.8.0 00:05:13.485 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:13.485 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:13.485 SYMLINK libspdk_nvmf.so 00:05:13.485 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:13.485 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:13.485 LIB libspdk_vhost.a 00:05:13.744 SYMLINK libspdk_iscsi.so 00:05:13.744 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:13.744 SO libspdk_vhost.so.8.0 00:05:13.744 CC lib/ftl/base/ftl_base_dev.o 00:05:13.744 CC lib/ftl/base/ftl_base_bdev.o 00:05:13.744 CC lib/ftl/ftl_trace.o 00:05:13.744 SYMLINK libspdk_vhost.so 00:05:14.004 LIB libspdk_ftl.a 00:05:14.262 SO libspdk_ftl.so.9.0 00:05:14.520 SYMLINK libspdk_ftl.so 00:05:14.779 CC module/env_dpdk/env_dpdk_rpc.o 00:05:14.779 CC module/blob/bdev/blob_bdev.o 00:05:14.779 CC module/fsdev/aio/fsdev_aio.o 00:05:14.779 CC module/keyring/file/keyring.o 00:05:14.779 CC module/sock/posix/posix.o 00:05:14.779 CC module/keyring/linux/keyring.o 00:05:14.780 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:14.780 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:14.780 CC module/sock/uring/uring.o 00:05:14.780 CC module/accel/error/accel_error.o 00:05:14.780 LIB libspdk_env_dpdk_rpc.a 00:05:14.780 SO libspdk_env_dpdk_rpc.so.6.0 00:05:15.037 SYMLINK libspdk_env_dpdk_rpc.so 00:05:15.037 CC module/accel/error/accel_error_rpc.o 00:05:15.037 CC module/keyring/file/keyring_rpc.o 00:05:15.037 CC module/keyring/linux/keyring_rpc.o 00:05:15.037 LIB libspdk_scheduler_dpdk_governor.a 00:05:15.038 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:15.038 LIB libspdk_scheduler_dynamic.a 00:05:15.038 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:15.038 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:15.038 SO libspdk_scheduler_dynamic.so.4.0 00:05:15.038 CC module/fsdev/aio/linux_aio_mgr.o 00:05:15.038 LIB libspdk_blob_bdev.a 00:05:15.038 LIB libspdk_accel_error.a 00:05:15.038 SYMLINK libspdk_scheduler_dynamic.so 00:05:15.038 LIB libspdk_keyring_linux.a 00:05:15.038 LIB libspdk_keyring_file.a 00:05:15.038 SO libspdk_blob_bdev.so.11.0 00:05:15.038 SO 
libspdk_accel_error.so.2.0 00:05:15.038 SO libspdk_keyring_linux.so.1.0 00:05:15.038 SO libspdk_keyring_file.so.2.0 00:05:15.296 SYMLINK libspdk_blob_bdev.so 00:05:15.296 SYMLINK libspdk_accel_error.so 00:05:15.296 SYMLINK libspdk_keyring_linux.so 00:05:15.296 SYMLINK libspdk_keyring_file.so 00:05:15.296 CC module/scheduler/gscheduler/gscheduler.o 00:05:15.296 CC module/accel/ioat/accel_ioat.o 00:05:15.296 CC module/accel/iaa/accel_iaa.o 00:05:15.296 CC module/accel/dsa/accel_dsa.o 00:05:15.555 CC module/bdev/delay/vbdev_delay.o 00:05:15.555 LIB libspdk_scheduler_gscheduler.a 00:05:15.555 CC module/bdev/error/vbdev_error.o 00:05:15.555 LIB libspdk_fsdev_aio.a 00:05:15.555 CC module/blobfs/bdev/blobfs_bdev.o 00:05:15.555 SO libspdk_scheduler_gscheduler.so.4.0 00:05:15.555 SO libspdk_fsdev_aio.so.1.0 00:05:15.555 LIB libspdk_sock_uring.a 00:05:15.555 LIB libspdk_sock_posix.a 00:05:15.555 SO libspdk_sock_uring.so.5.0 00:05:15.555 SYMLINK libspdk_scheduler_gscheduler.so 00:05:15.555 SO libspdk_sock_posix.so.6.0 00:05:15.555 CC module/accel/ioat/accel_ioat_rpc.o 00:05:15.555 SYMLINK libspdk_fsdev_aio.so 00:05:15.555 CC module/accel/iaa/accel_iaa_rpc.o 00:05:15.555 SYMLINK libspdk_sock_uring.so 00:05:15.555 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:15.555 SYMLINK libspdk_sock_posix.so 00:05:15.814 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:15.814 CC module/accel/dsa/accel_dsa_rpc.o 00:05:15.814 LIB libspdk_accel_ioat.a 00:05:15.814 CC module/bdev/error/vbdev_error_rpc.o 00:05:15.814 LIB libspdk_accel_iaa.a 00:05:15.814 SO libspdk_accel_ioat.so.6.0 00:05:15.814 CC module/bdev/gpt/gpt.o 00:05:15.814 SO libspdk_accel_iaa.so.3.0 00:05:15.814 CC module/bdev/lvol/vbdev_lvol.o 00:05:15.814 LIB libspdk_blobfs_bdev.a 00:05:15.814 CC module/bdev/malloc/bdev_malloc.o 00:05:15.814 SYMLINK libspdk_accel_ioat.so 00:05:15.814 LIB libspdk_accel_dsa.a 00:05:15.814 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:15.814 SO libspdk_blobfs_bdev.so.6.0 00:05:15.814 SYMLINK libspdk_accel_iaa.so 00:05:15.814 CC module/bdev/gpt/vbdev_gpt.o 00:05:15.814 SO libspdk_accel_dsa.so.5.0 00:05:15.814 LIB libspdk_bdev_delay.a 00:05:15.814 SYMLINK libspdk_blobfs_bdev.so 00:05:15.814 SO libspdk_bdev_delay.so.6.0 00:05:16.072 SYMLINK libspdk_accel_dsa.so 00:05:16.072 LIB libspdk_bdev_error.a 00:05:16.072 SO libspdk_bdev_error.so.6.0 00:05:16.072 SYMLINK libspdk_bdev_delay.so 00:05:16.072 CC module/bdev/null/bdev_null.o 00:05:16.072 SYMLINK libspdk_bdev_error.so 00:05:16.072 CC module/bdev/nvme/bdev_nvme.o 00:05:16.072 CC module/bdev/passthru/vbdev_passthru.o 00:05:16.072 LIB libspdk_bdev_gpt.a 00:05:16.072 CC module/bdev/raid/bdev_raid.o 00:05:16.072 CC module/bdev/split/vbdev_split.o 00:05:16.331 SO libspdk_bdev_gpt.so.6.0 00:05:16.331 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:16.331 CC module/bdev/uring/bdev_uring.o 00:05:16.331 LIB libspdk_bdev_malloc.a 00:05:16.331 SYMLINK libspdk_bdev_gpt.so 00:05:16.331 CC module/bdev/raid/bdev_raid_rpc.o 00:05:16.331 SO libspdk_bdev_malloc.so.6.0 00:05:16.331 SYMLINK libspdk_bdev_malloc.so 00:05:16.331 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:16.331 CC module/bdev/null/bdev_null_rpc.o 00:05:16.331 CC module/bdev/raid/bdev_raid_sb.o 00:05:16.331 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:16.588 CC module/bdev/split/vbdev_split_rpc.o 00:05:16.588 CC module/bdev/raid/raid0.o 00:05:16.588 LIB libspdk_bdev_null.a 00:05:16.588 SO libspdk_bdev_null.so.6.0 00:05:16.588 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:16.588 LIB libspdk_bdev_passthru.a 00:05:16.588 CC 
module/bdev/uring/bdev_uring_rpc.o 00:05:16.588 LIB libspdk_bdev_split.a 00:05:16.588 SYMLINK libspdk_bdev_null.so 00:05:16.588 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:16.588 CC module/bdev/raid/raid1.o 00:05:16.588 SO libspdk_bdev_passthru.so.6.0 00:05:16.588 SO libspdk_bdev_split.so.6.0 00:05:16.845 LIB libspdk_bdev_lvol.a 00:05:16.846 SYMLINK libspdk_bdev_passthru.so 00:05:16.846 SYMLINK libspdk_bdev_split.so 00:05:16.846 CC module/bdev/nvme/nvme_rpc.o 00:05:16.846 CC module/bdev/nvme/bdev_mdns_client.o 00:05:16.846 SO libspdk_bdev_lvol.so.6.0 00:05:16.846 LIB libspdk_bdev_zone_block.a 00:05:16.846 CC module/bdev/raid/concat.o 00:05:16.846 SO libspdk_bdev_zone_block.so.6.0 00:05:16.846 LIB libspdk_bdev_uring.a 00:05:16.846 SYMLINK libspdk_bdev_lvol.so 00:05:16.846 CC module/bdev/nvme/vbdev_opal.o 00:05:16.846 SO libspdk_bdev_uring.so.6.0 00:05:16.846 SYMLINK libspdk_bdev_zone_block.so 00:05:16.846 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:16.846 SYMLINK libspdk_bdev_uring.so 00:05:16.846 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:17.104 CC module/bdev/aio/bdev_aio.o 00:05:17.104 CC module/bdev/aio/bdev_aio_rpc.o 00:05:17.104 CC module/bdev/ftl/bdev_ftl.o 00:05:17.104 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:17.104 CC module/bdev/iscsi/bdev_iscsi.o 00:05:17.104 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:17.104 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:17.104 LIB libspdk_bdev_raid.a 00:05:17.362 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:17.362 SO libspdk_bdev_raid.so.6.0 00:05:17.362 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:17.362 SYMLINK libspdk_bdev_raid.so 00:05:17.362 LIB libspdk_bdev_ftl.a 00:05:17.362 LIB libspdk_bdev_aio.a 00:05:17.621 SO libspdk_bdev_ftl.so.6.0 00:05:17.621 SO libspdk_bdev_aio.so.6.0 00:05:17.621 SYMLINK libspdk_bdev_ftl.so 00:05:17.621 SYMLINK libspdk_bdev_aio.so 00:05:17.621 LIB libspdk_bdev_iscsi.a 00:05:17.621 SO libspdk_bdev_iscsi.so.6.0 00:05:17.621 SYMLINK libspdk_bdev_iscsi.so 00:05:17.621 LIB libspdk_bdev_virtio.a 00:05:17.879 SO libspdk_bdev_virtio.so.6.0 00:05:17.879 SYMLINK libspdk_bdev_virtio.so 00:05:18.447 LIB libspdk_bdev_nvme.a 00:05:18.447 SO libspdk_bdev_nvme.so.7.0 00:05:18.447 SYMLINK libspdk_bdev_nvme.so 00:05:19.013 CC module/event/subsystems/vmd/vmd.o 00:05:19.014 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:19.014 CC module/event/subsystems/iobuf/iobuf.o 00:05:19.014 CC module/event/subsystems/fsdev/fsdev.o 00:05:19.014 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:19.014 CC module/event/subsystems/scheduler/scheduler.o 00:05:19.014 CC module/event/subsystems/keyring/keyring.o 00:05:19.014 CC module/event/subsystems/sock/sock.o 00:05:19.014 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:19.273 LIB libspdk_event_vmd.a 00:05:19.273 LIB libspdk_event_keyring.a 00:05:19.273 LIB libspdk_event_scheduler.a 00:05:19.273 LIB libspdk_event_fsdev.a 00:05:19.273 LIB libspdk_event_sock.a 00:05:19.273 LIB libspdk_event_vhost_blk.a 00:05:19.273 LIB libspdk_event_iobuf.a 00:05:19.273 SO libspdk_event_vmd.so.6.0 00:05:19.273 SO libspdk_event_fsdev.so.1.0 00:05:19.273 SO libspdk_event_scheduler.so.4.0 00:05:19.273 SO libspdk_event_sock.so.5.0 00:05:19.273 SO libspdk_event_keyring.so.1.0 00:05:19.273 SO libspdk_event_vhost_blk.so.3.0 00:05:19.273 SO libspdk_event_iobuf.so.3.0 00:05:19.273 SYMLINK libspdk_event_vmd.so 00:05:19.273 SYMLINK libspdk_event_scheduler.so 00:05:19.273 SYMLINK libspdk_event_fsdev.so 00:05:19.273 SYMLINK libspdk_event_sock.so 00:05:19.273 SYMLINK libspdk_event_vhost_blk.so 00:05:19.273 SYMLINK 
libspdk_event_keyring.so 00:05:19.273 SYMLINK libspdk_event_iobuf.so 00:05:19.532 CC module/event/subsystems/accel/accel.o 00:05:19.791 LIB libspdk_event_accel.a 00:05:19.791 SO libspdk_event_accel.so.6.0 00:05:19.791 SYMLINK libspdk_event_accel.so 00:05:20.051 CC module/event/subsystems/bdev/bdev.o 00:05:20.310 LIB libspdk_event_bdev.a 00:05:20.310 SO libspdk_event_bdev.so.6.0 00:05:20.310 SYMLINK libspdk_event_bdev.so 00:05:20.585 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:20.585 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:20.585 CC module/event/subsystems/nbd/nbd.o 00:05:20.585 CC module/event/subsystems/ublk/ublk.o 00:05:20.585 CC module/event/subsystems/scsi/scsi.o 00:05:20.844 LIB libspdk_event_scsi.a 00:05:20.844 LIB libspdk_event_nbd.a 00:05:20.844 LIB libspdk_event_ublk.a 00:05:20.844 SO libspdk_event_scsi.so.6.0 00:05:20.844 SO libspdk_event_nbd.so.6.0 00:05:20.844 SO libspdk_event_ublk.so.3.0 00:05:20.844 LIB libspdk_event_nvmf.a 00:05:20.844 SYMLINK libspdk_event_scsi.so 00:05:20.844 SYMLINK libspdk_event_nbd.so 00:05:20.844 SYMLINK libspdk_event_ublk.so 00:05:20.844 SO libspdk_event_nvmf.so.6.0 00:05:20.844 SYMLINK libspdk_event_nvmf.so 00:05:21.101 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:21.101 CC module/event/subsystems/iscsi/iscsi.o 00:05:21.101 LIB libspdk_event_vhost_scsi.a 00:05:21.359 LIB libspdk_event_iscsi.a 00:05:21.359 SO libspdk_event_vhost_scsi.so.3.0 00:05:21.359 SO libspdk_event_iscsi.so.6.0 00:05:21.359 SYMLINK libspdk_event_vhost_scsi.so 00:05:21.359 SYMLINK libspdk_event_iscsi.so 00:05:21.618 SO libspdk.so.6.0 00:05:21.618 SYMLINK libspdk.so 00:05:21.618 CC test/rpc_client/rpc_client_test.o 00:05:21.618 TEST_HEADER include/spdk/accel.h 00:05:21.618 TEST_HEADER include/spdk/accel_module.h 00:05:21.618 TEST_HEADER include/spdk/assert.h 00:05:21.618 CXX app/trace/trace.o 00:05:21.618 TEST_HEADER include/spdk/barrier.h 00:05:21.618 TEST_HEADER include/spdk/base64.h 00:05:21.618 TEST_HEADER include/spdk/bdev.h 00:05:21.618 TEST_HEADER include/spdk/bdev_module.h 00:05:21.877 TEST_HEADER include/spdk/bdev_zone.h 00:05:21.877 TEST_HEADER include/spdk/bit_array.h 00:05:21.877 TEST_HEADER include/spdk/bit_pool.h 00:05:21.877 TEST_HEADER include/spdk/blob_bdev.h 00:05:21.877 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:21.877 TEST_HEADER include/spdk/blobfs.h 00:05:21.877 TEST_HEADER include/spdk/blob.h 00:05:21.877 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:21.877 TEST_HEADER include/spdk/conf.h 00:05:21.877 TEST_HEADER include/spdk/config.h 00:05:21.877 TEST_HEADER include/spdk/cpuset.h 00:05:21.877 TEST_HEADER include/spdk/crc16.h 00:05:21.877 TEST_HEADER include/spdk/crc32.h 00:05:21.877 TEST_HEADER include/spdk/crc64.h 00:05:21.877 TEST_HEADER include/spdk/dif.h 00:05:21.877 TEST_HEADER include/spdk/dma.h 00:05:21.877 TEST_HEADER include/spdk/endian.h 00:05:21.877 TEST_HEADER include/spdk/env_dpdk.h 00:05:21.877 TEST_HEADER include/spdk/env.h 00:05:21.877 TEST_HEADER include/spdk/event.h 00:05:21.877 TEST_HEADER include/spdk/fd_group.h 00:05:21.877 TEST_HEADER include/spdk/fd.h 00:05:21.877 TEST_HEADER include/spdk/file.h 00:05:21.877 TEST_HEADER include/spdk/fsdev.h 00:05:21.877 TEST_HEADER include/spdk/fsdev_module.h 00:05:21.877 TEST_HEADER include/spdk/ftl.h 00:05:21.877 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:21.877 TEST_HEADER include/spdk/gpt_spec.h 00:05:21.877 TEST_HEADER include/spdk/hexlify.h 00:05:21.877 CC test/thread/poller_perf/poller_perf.o 00:05:21.877 TEST_HEADER include/spdk/histogram_data.h 00:05:21.877 
TEST_HEADER include/spdk/idxd.h 00:05:21.877 TEST_HEADER include/spdk/idxd_spec.h 00:05:21.877 TEST_HEADER include/spdk/init.h 00:05:21.877 CC examples/util/zipf/zipf.o 00:05:21.877 TEST_HEADER include/spdk/ioat.h 00:05:21.877 TEST_HEADER include/spdk/ioat_spec.h 00:05:21.877 TEST_HEADER include/spdk/iscsi_spec.h 00:05:21.877 CC examples/ioat/perf/perf.o 00:05:21.877 TEST_HEADER include/spdk/json.h 00:05:21.877 TEST_HEADER include/spdk/jsonrpc.h 00:05:21.877 TEST_HEADER include/spdk/keyring.h 00:05:21.877 TEST_HEADER include/spdk/keyring_module.h 00:05:21.877 TEST_HEADER include/spdk/likely.h 00:05:21.877 TEST_HEADER include/spdk/log.h 00:05:21.877 TEST_HEADER include/spdk/lvol.h 00:05:21.877 TEST_HEADER include/spdk/md5.h 00:05:21.877 CC test/app/bdev_svc/bdev_svc.o 00:05:21.877 CC test/dma/test_dma/test_dma.o 00:05:21.877 TEST_HEADER include/spdk/memory.h 00:05:21.877 TEST_HEADER include/spdk/mmio.h 00:05:21.877 TEST_HEADER include/spdk/nbd.h 00:05:21.877 TEST_HEADER include/spdk/net.h 00:05:21.877 TEST_HEADER include/spdk/notify.h 00:05:21.877 TEST_HEADER include/spdk/nvme.h 00:05:21.877 TEST_HEADER include/spdk/nvme_intel.h 00:05:21.877 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:21.877 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:21.877 TEST_HEADER include/spdk/nvme_spec.h 00:05:21.877 TEST_HEADER include/spdk/nvme_zns.h 00:05:21.877 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:21.877 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:21.877 TEST_HEADER include/spdk/nvmf.h 00:05:21.877 TEST_HEADER include/spdk/nvmf_spec.h 00:05:21.877 TEST_HEADER include/spdk/nvmf_transport.h 00:05:21.877 TEST_HEADER include/spdk/opal.h 00:05:21.877 TEST_HEADER include/spdk/opal_spec.h 00:05:21.877 TEST_HEADER include/spdk/pci_ids.h 00:05:21.877 TEST_HEADER include/spdk/pipe.h 00:05:21.877 TEST_HEADER include/spdk/queue.h 00:05:21.877 TEST_HEADER include/spdk/reduce.h 00:05:21.877 TEST_HEADER include/spdk/rpc.h 00:05:21.877 TEST_HEADER include/spdk/scheduler.h 00:05:21.877 CC test/env/mem_callbacks/mem_callbacks.o 00:05:21.877 TEST_HEADER include/spdk/scsi.h 00:05:21.877 TEST_HEADER include/spdk/scsi_spec.h 00:05:21.877 TEST_HEADER include/spdk/sock.h 00:05:21.877 TEST_HEADER include/spdk/stdinc.h 00:05:21.877 LINK rpc_client_test 00:05:21.877 TEST_HEADER include/spdk/string.h 00:05:21.877 TEST_HEADER include/spdk/thread.h 00:05:21.877 TEST_HEADER include/spdk/trace.h 00:05:21.877 TEST_HEADER include/spdk/trace_parser.h 00:05:21.877 TEST_HEADER include/spdk/tree.h 00:05:21.877 TEST_HEADER include/spdk/ublk.h 00:05:22.136 TEST_HEADER include/spdk/util.h 00:05:22.136 TEST_HEADER include/spdk/uuid.h 00:05:22.136 TEST_HEADER include/spdk/version.h 00:05:22.136 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:22.136 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:22.136 TEST_HEADER include/spdk/vhost.h 00:05:22.136 TEST_HEADER include/spdk/vmd.h 00:05:22.136 LINK poller_perf 00:05:22.136 TEST_HEADER include/spdk/xor.h 00:05:22.136 TEST_HEADER include/spdk/zipf.h 00:05:22.136 CXX test/cpp_headers/accel.o 00:05:22.136 LINK interrupt_tgt 00:05:22.136 LINK zipf 00:05:22.136 LINK ioat_perf 00:05:22.136 LINK bdev_svc 00:05:22.136 LINK mem_callbacks 00:05:22.136 LINK spdk_trace 00:05:22.136 CXX test/cpp_headers/accel_module.o 00:05:22.136 CXX test/cpp_headers/assert.o 00:05:22.136 CC test/env/vtophys/vtophys.o 00:05:22.395 CC test/app/histogram_perf/histogram_perf.o 00:05:22.395 CC examples/ioat/verify/verify.o 00:05:22.395 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:22.395 CXX test/cpp_headers/barrier.o 00:05:22.395 
CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:22.395 CXX test/cpp_headers/base64.o 00:05:22.395 LINK vtophys 00:05:22.395 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:22.395 CC app/trace_record/trace_record.o 00:05:22.395 LINK test_dma 00:05:22.395 LINK histogram_perf 00:05:22.654 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:22.654 LINK verify 00:05:22.654 CXX test/cpp_headers/bdev.o 00:05:22.654 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:22.654 LINK spdk_trace_record 00:05:22.932 CXX test/cpp_headers/bdev_module.o 00:05:22.933 CC test/env/memory/memory_ut.o 00:05:22.933 LINK nvme_fuzz 00:05:22.933 CC examples/thread/thread/thread_ex.o 00:05:22.933 CC examples/sock/hello_world/hello_sock.o 00:05:22.933 LINK env_dpdk_post_init 00:05:22.933 CC examples/vmd/lsvmd/lsvmd.o 00:05:22.933 CXX test/cpp_headers/bdev_zone.o 00:05:22.933 LINK vhost_fuzz 00:05:22.933 LINK lsvmd 00:05:22.933 CC app/nvmf_tgt/nvmf_main.o 00:05:23.191 CC examples/vmd/led/led.o 00:05:23.191 LINK hello_sock 00:05:23.191 LINK thread 00:05:23.191 CC examples/idxd/perf/perf.o 00:05:23.191 CXX test/cpp_headers/bit_array.o 00:05:23.191 CXX test/cpp_headers/bit_pool.o 00:05:23.191 LINK led 00:05:23.191 LINK nvmf_tgt 00:05:23.191 CC test/app/jsoncat/jsoncat.o 00:05:23.191 CC test/app/stub/stub.o 00:05:23.451 CXX test/cpp_headers/blob_bdev.o 00:05:23.451 CXX test/cpp_headers/blobfs_bdev.o 00:05:23.451 LINK jsoncat 00:05:23.451 CC examples/nvme/hello_world/hello_world.o 00:05:23.451 LINK stub 00:05:23.451 LINK idxd_perf 00:05:23.451 LINK memory_ut 00:05:23.451 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:23.710 CC app/iscsi_tgt/iscsi_tgt.o 00:05:23.710 CXX test/cpp_headers/blobfs.o 00:05:23.710 CXX test/cpp_headers/blob.o 00:05:23.710 CC examples/nvme/reconnect/reconnect.o 00:05:23.710 CC test/env/pci/pci_ut.o 00:05:23.710 LINK hello_world 00:05:23.710 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:23.710 LINK iscsi_tgt 00:05:23.970 CXX test/cpp_headers/conf.o 00:05:23.970 LINK hello_fsdev 00:05:23.970 CC app/spdk_tgt/spdk_tgt.o 00:05:23.970 CC examples/nvme/arbitration/arbitration.o 00:05:23.970 CC examples/nvme/hotplug/hotplug.o 00:05:23.970 CXX test/cpp_headers/config.o 00:05:23.970 LINK reconnect 00:05:23.970 CXX test/cpp_headers/cpuset.o 00:05:23.970 LINK pci_ut 00:05:24.229 LINK iscsi_fuzz 00:05:24.229 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:24.229 LINK spdk_tgt 00:05:24.229 CC examples/accel/perf/accel_perf.o 00:05:24.229 CXX test/cpp_headers/crc16.o 00:05:24.229 LINK hotplug 00:05:24.229 LINK arbitration 00:05:24.229 LINK nvme_manage 00:05:24.229 CXX test/cpp_headers/crc32.o 00:05:24.229 CC examples/nvme/abort/abort.o 00:05:24.229 LINK cmb_copy 00:05:24.488 CC app/spdk_lspci/spdk_lspci.o 00:05:24.488 CXX test/cpp_headers/crc64.o 00:05:24.488 CC app/spdk_nvme_perf/perf.o 00:05:24.488 CXX test/cpp_headers/dif.o 00:05:24.488 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:24.488 CC test/event/event_perf/event_perf.o 00:05:24.747 CC test/nvme/aer/aer.o 00:05:24.747 CC examples/blob/hello_world/hello_blob.o 00:05:24.747 LINK spdk_lspci 00:05:24.747 LINK event_perf 00:05:24.747 CXX test/cpp_headers/dma.o 00:05:24.747 LINK pmr_persistence 00:05:24.747 LINK abort 00:05:24.747 LINK accel_perf 00:05:24.747 CC test/event/reactor/reactor.o 00:05:24.747 CXX test/cpp_headers/endian.o 00:05:24.747 CXX test/cpp_headers/env_dpdk.o 00:05:25.007 CXX test/cpp_headers/env.o 00:05:25.007 LINK hello_blob 00:05:25.007 LINK aer 00:05:25.007 LINK reactor 00:05:25.007 CC test/event/reactor_perf/reactor_perf.o 
00:05:25.007 CC examples/blob/cli/blobcli.o 00:05:25.007 CXX test/cpp_headers/event.o 00:05:25.007 CC test/event/app_repeat/app_repeat.o 00:05:25.007 CC test/nvme/reset/reset.o 00:05:25.007 CC test/event/scheduler/scheduler.o 00:05:25.266 CC app/spdk_nvme_identify/identify.o 00:05:25.266 LINK reactor_perf 00:05:25.266 CC test/nvme/sgl/sgl.o 00:05:25.266 CC test/nvme/e2edp/nvme_dp.o 00:05:25.266 LINK app_repeat 00:05:25.266 CXX test/cpp_headers/fd_group.o 00:05:25.266 LINK spdk_nvme_perf 00:05:25.266 LINK scheduler 00:05:25.525 LINK reset 00:05:25.525 CC test/nvme/overhead/overhead.o 00:05:25.525 CXX test/cpp_headers/fd.o 00:05:25.525 LINK sgl 00:05:25.525 CC test/nvme/err_injection/err_injection.o 00:05:25.525 LINK nvme_dp 00:05:25.525 LINK blobcli 00:05:25.525 CXX test/cpp_headers/file.o 00:05:25.525 CC app/spdk_nvme_discover/discovery_aer.o 00:05:25.784 CC test/nvme/startup/startup.o 00:05:25.784 LINK overhead 00:05:25.784 LINK err_injection 00:05:25.784 CC app/spdk_top/spdk_top.o 00:05:25.784 CC test/accel/dif/dif.o 00:05:25.784 CC test/nvme/reserve/reserve.o 00:05:25.784 CXX test/cpp_headers/fsdev.o 00:05:25.784 LINK spdk_nvme_discover 00:05:25.784 CXX test/cpp_headers/fsdev_module.o 00:05:25.784 LINK startup 00:05:26.043 CC test/nvme/simple_copy/simple_copy.o 00:05:26.043 CC examples/bdev/hello_world/hello_bdev.o 00:05:26.043 LINK reserve 00:05:26.043 LINK spdk_nvme_identify 00:05:26.043 CXX test/cpp_headers/ftl.o 00:05:26.043 CC test/nvme/connect_stress/connect_stress.o 00:05:26.043 CC test/nvme/boot_partition/boot_partition.o 00:05:26.043 CXX test/cpp_headers/fuse_dispatcher.o 00:05:26.043 CC test/nvme/compliance/nvme_compliance.o 00:05:26.302 LINK simple_copy 00:05:26.302 LINK hello_bdev 00:05:26.302 CC test/nvme/fused_ordering/fused_ordering.o 00:05:26.302 LINK connect_stress 00:05:26.302 LINK boot_partition 00:05:26.302 CXX test/cpp_headers/gpt_spec.o 00:05:26.302 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:26.302 CXX test/cpp_headers/hexlify.o 00:05:26.302 LINK dif 00:05:26.302 CXX test/cpp_headers/histogram_data.o 00:05:26.560 CXX test/cpp_headers/idxd.o 00:05:26.560 LINK fused_ordering 00:05:26.560 LINK nvme_compliance 00:05:26.560 LINK doorbell_aers 00:05:26.560 CC examples/bdev/bdevperf/bdevperf.o 00:05:26.560 CXX test/cpp_headers/idxd_spec.o 00:05:26.560 LINK spdk_top 00:05:26.560 CXX test/cpp_headers/init.o 00:05:26.560 CC app/spdk_dd/spdk_dd.o 00:05:26.560 CXX test/cpp_headers/ioat.o 00:05:26.560 CC app/vhost/vhost.o 00:05:26.818 CC test/nvme/fdp/fdp.o 00:05:26.818 CXX test/cpp_headers/ioat_spec.o 00:05:26.818 CC app/fio/nvme/fio_plugin.o 00:05:26.818 CC app/fio/bdev/fio_plugin.o 00:05:26.818 CXX test/cpp_headers/iscsi_spec.o 00:05:26.818 LINK vhost 00:05:26.818 CXX test/cpp_headers/json.o 00:05:27.077 CC test/nvme/cuse/cuse.o 00:05:27.077 CC test/blobfs/mkfs/mkfs.o 00:05:27.077 CXX test/cpp_headers/jsonrpc.o 00:05:27.077 LINK fdp 00:05:27.077 LINK spdk_dd 00:05:27.334 LINK mkfs 00:05:27.334 CXX test/cpp_headers/keyring.o 00:05:27.334 CXX test/cpp_headers/keyring_module.o 00:05:27.334 CC test/lvol/esnap/esnap.o 00:05:27.334 CXX test/cpp_headers/likely.o 00:05:27.334 LINK spdk_bdev 00:05:27.334 CC test/bdev/bdevio/bdevio.o 00:05:27.334 LINK spdk_nvme 00:05:27.334 LINK bdevperf 00:05:27.334 CXX test/cpp_headers/log.o 00:05:27.334 CXX test/cpp_headers/lvol.o 00:05:27.334 CXX test/cpp_headers/md5.o 00:05:27.334 CXX test/cpp_headers/memory.o 00:05:27.334 CXX test/cpp_headers/mmio.o 00:05:27.592 CXX test/cpp_headers/nbd.o 00:05:27.592 CXX test/cpp_headers/net.o 
00:05:27.592 CXX test/cpp_headers/notify.o 00:05:27.592 CXX test/cpp_headers/nvme.o 00:05:27.592 CXX test/cpp_headers/nvme_intel.o 00:05:27.592 CXX test/cpp_headers/nvme_ocssd.o 00:05:27.592 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:27.592 CXX test/cpp_headers/nvme_spec.o 00:05:27.850 LINK bdevio 00:05:27.850 CXX test/cpp_headers/nvme_zns.o 00:05:27.850 CXX test/cpp_headers/nvmf_cmd.o 00:05:27.850 CC examples/nvmf/nvmf/nvmf.o 00:05:27.850 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:27.850 CXX test/cpp_headers/nvmf.o 00:05:27.850 CXX test/cpp_headers/nvmf_spec.o 00:05:27.850 CXX test/cpp_headers/nvmf_transport.o 00:05:27.850 CXX test/cpp_headers/opal.o 00:05:27.850 CXX test/cpp_headers/opal_spec.o 00:05:28.108 CXX test/cpp_headers/pci_ids.o 00:05:28.108 CXX test/cpp_headers/pipe.o 00:05:28.108 CXX test/cpp_headers/queue.o 00:05:28.108 CXX test/cpp_headers/reduce.o 00:05:28.108 CXX test/cpp_headers/rpc.o 00:05:28.108 CXX test/cpp_headers/scheduler.o 00:05:28.108 LINK nvmf 00:05:28.108 CXX test/cpp_headers/scsi.o 00:05:28.108 CXX test/cpp_headers/scsi_spec.o 00:05:28.108 CXX test/cpp_headers/sock.o 00:05:28.108 CXX test/cpp_headers/stdinc.o 00:05:28.108 CXX test/cpp_headers/string.o 00:05:28.367 CXX test/cpp_headers/thread.o 00:05:28.367 LINK cuse 00:05:28.367 CXX test/cpp_headers/trace.o 00:05:28.367 CXX test/cpp_headers/trace_parser.o 00:05:28.367 CXX test/cpp_headers/tree.o 00:05:28.367 CXX test/cpp_headers/ublk.o 00:05:28.367 CXX test/cpp_headers/util.o 00:05:28.367 CXX test/cpp_headers/uuid.o 00:05:28.367 CXX test/cpp_headers/version.o 00:05:28.367 CXX test/cpp_headers/vfio_user_pci.o 00:05:28.367 CXX test/cpp_headers/vfio_user_spec.o 00:05:28.367 CXX test/cpp_headers/vhost.o 00:05:28.367 CXX test/cpp_headers/vmd.o 00:05:28.367 CXX test/cpp_headers/xor.o 00:05:28.626 CXX test/cpp_headers/zipf.o 00:05:31.929 LINK esnap 00:05:32.189 00:05:32.189 real 1m26.778s 00:05:32.189 user 6m56.873s 00:05:32.189 sys 1m7.867s 00:05:32.189 22:36:46 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:32.189 22:36:46 make -- common/autotest_common.sh@10 -- $ set +x 00:05:32.189 ************************************ 00:05:32.189 END TEST make 00:05:32.189 ************************************ 00:05:32.189 22:36:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:32.189 22:36:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:32.189 22:36:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:32.189 22:36:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:32.189 22:36:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:32.189 22:36:46 -- pm/common@44 -- $ pid=6023 00:05:32.189 22:36:46 -- pm/common@50 -- $ kill -TERM 6023 00:05:32.189 22:36:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:32.189 22:36:46 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:32.189 22:36:46 -- pm/common@44 -- $ pid=6025 00:05:32.189 22:36:46 -- pm/common@50 -- $ kill -TERM 6025 00:05:32.189 22:36:46 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:32.189 22:36:46 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:32.189 22:36:46 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:32.449 22:36:47 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:32.449 22:36:47 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.449 22:36:47 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.449 22:36:47 -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.449 22:36:47 -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.449 22:36:47 -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.449 22:36:47 -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.449 22:36:47 -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.449 22:36:47 -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.449 22:36:47 -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.449 22:36:47 -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.449 22:36:47 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.449 22:36:47 -- scripts/common.sh@344 -- # case "$op" in 00:05:32.449 22:36:47 -- scripts/common.sh@345 -- # : 1 00:05:32.449 22:36:47 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.449 22:36:47 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.449 22:36:47 -- scripts/common.sh@365 -- # decimal 1 00:05:32.449 22:36:47 -- scripts/common.sh@353 -- # local d=1 00:05:32.449 22:36:47 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.449 22:36:47 -- scripts/common.sh@355 -- # echo 1 00:05:32.449 22:36:47 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.449 22:36:47 -- scripts/common.sh@366 -- # decimal 2 00:05:32.449 22:36:47 -- scripts/common.sh@353 -- # local d=2 00:05:32.449 22:36:47 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.449 22:36:47 -- scripts/common.sh@355 -- # echo 2 00:05:32.449 22:36:47 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.449 22:36:47 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.449 22:36:47 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.450 22:36:47 -- scripts/common.sh@368 -- # return 0 00:05:32.450 22:36:47 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.450 22:36:47 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:32.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.450 --rc genhtml_branch_coverage=1 00:05:32.450 --rc genhtml_function_coverage=1 00:05:32.450 --rc genhtml_legend=1 00:05:32.450 --rc geninfo_all_blocks=1 00:05:32.450 --rc geninfo_unexecuted_blocks=1 00:05:32.450 00:05:32.450 ' 00:05:32.450 22:36:47 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:32.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.450 --rc genhtml_branch_coverage=1 00:05:32.450 --rc genhtml_function_coverage=1 00:05:32.450 --rc genhtml_legend=1 00:05:32.450 --rc geninfo_all_blocks=1 00:05:32.450 --rc geninfo_unexecuted_blocks=1 00:05:32.450 00:05:32.450 ' 00:05:32.450 22:36:47 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:32.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.450 --rc genhtml_branch_coverage=1 00:05:32.450 --rc genhtml_function_coverage=1 00:05:32.450 --rc genhtml_legend=1 00:05:32.450 --rc geninfo_all_blocks=1 00:05:32.450 --rc geninfo_unexecuted_blocks=1 00:05:32.450 00:05:32.450 ' 00:05:32.450 22:36:47 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:32.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.450 --rc genhtml_branch_coverage=1 00:05:32.450 --rc genhtml_function_coverage=1 00:05:32.450 --rc genhtml_legend=1 00:05:32.450 --rc geninfo_all_blocks=1 00:05:32.450 --rc geninfo_unexecuted_blocks=1 00:05:32.450 00:05:32.450 ' 00:05:32.450 22:36:47 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:32.450 22:36:47 -- nvmf/common.sh@7 -- # uname -s 00:05:32.450 22:36:47 -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:32.450 22:36:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:32.450 22:36:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:32.450 22:36:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:32.450 22:36:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:32.450 22:36:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:32.450 22:36:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:32.450 22:36:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:32.450 22:36:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:32.450 22:36:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:32.450 22:36:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:05:32.450 22:36:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:05:32.450 22:36:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:32.450 22:36:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:32.450 22:36:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:32.450 22:36:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:32.450 22:36:47 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:32.450 22:36:47 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:32.450 22:36:47 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:32.450 22:36:47 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:32.450 22:36:47 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:32.450 22:36:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.450 22:36:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.450 22:36:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.450 22:36:47 -- paths/export.sh@5 -- # export PATH 00:05:32.450 22:36:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:32.450 22:36:47 -- nvmf/common.sh@51 -- # : 0 00:05:32.450 22:36:47 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:32.450 22:36:47 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:32.450 22:36:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:32.450 22:36:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:32.450 22:36:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:32.450 22:36:47 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:32.450 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:32.450 22:36:47 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:32.450 22:36:47 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:32.450 22:36:47 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:32.450 22:36:47 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:32.450 22:36:47 -- spdk/autotest.sh@32 -- # uname -s 00:05:32.450 22:36:47 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:32.450 22:36:47 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:32.450 22:36:47 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:32.450 22:36:47 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:32.450 22:36:47 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:32.450 22:36:47 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:32.450 22:36:47 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:32.450 22:36:47 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:32.450 22:36:47 -- spdk/autotest.sh@48 -- # udevadm_pid=66624 00:05:32.450 22:36:47 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:32.450 22:36:47 -- pm/common@17 -- # local monitor 00:05:32.450 22:36:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:32.450 22:36:47 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:32.450 22:36:47 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:32.450 22:36:47 -- pm/common@25 -- # sleep 1 00:05:32.450 22:36:47 -- pm/common@21 -- # date +%s 00:05:32.450 22:36:47 -- pm/common@21 -- # date +%s 00:05:32.450 22:36:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733611007 00:05:32.450 22:36:47 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733611007 00:05:32.450 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733611007_collect-cpu-load.pm.log 00:05:32.450 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733611007_collect-vmstat.pm.log 00:05:33.389 22:36:48 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:33.389 22:36:48 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:33.389 22:36:48 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:33.389 22:36:48 -- common/autotest_common.sh@10 -- # set +x 00:05:33.389 22:36:48 -- spdk/autotest.sh@59 -- # create_test_list 00:05:33.389 22:36:48 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:33.389 22:36:48 -- common/autotest_common.sh@10 -- # set +x 00:05:33.647 22:36:48 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:33.647 22:36:48 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:33.647 22:36:48 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:33.647 22:36:48 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:33.647 22:36:48 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:33.647 22:36:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:33.647 22:36:48 -- common/autotest_common.sh@1455 -- # uname 00:05:33.647 22:36:48 -- 
common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:33.647 22:36:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:33.647 22:36:48 -- common/autotest_common.sh@1475 -- # uname 00:05:33.647 22:36:48 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:33.647 22:36:48 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:33.647 22:36:48 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:33.647 lcov: LCOV version 1.15 00:05:33.647 22:36:48 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:48.525 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:48.525 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:03.417 22:37:16 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:03.417 22:37:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:03.417 22:37:16 -- common/autotest_common.sh@10 -- # set +x 00:06:03.417 22:37:16 -- spdk/autotest.sh@78 -- # rm -f 00:06:03.417 22:37:16 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:03.417 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:03.418 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:03.418 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:03.418 22:37:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:03.418 22:37:17 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:03.418 22:37:17 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:03.418 22:37:17 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:03.418 22:37:17 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:03.418 22:37:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:03.418 22:37:17 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:03.418 22:37:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:03.418 22:37:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:03.418 22:37:17 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:03.418 22:37:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:03.418 22:37:17 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:03.418 22:37:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:03.418 22:37:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:03.418 22:37:17 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:03.418 22:37:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:06:03.418 22:37:17 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:06:03.418 22:37:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:03.418 22:37:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:03.418 22:37:17 -- common/autotest_common.sh@1658 -- # for nvme 
in /sys/block/nvme* 00:06:03.418 22:37:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:06:03.418 22:37:17 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:06:03.418 22:37:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:03.418 22:37:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:03.418 22:37:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:03.418 22:37:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:03.418 22:37:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:03.418 22:37:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:03.418 22:37:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:03.418 22:37:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:03.418 No valid GPT data, bailing 00:06:03.418 22:37:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:03.418 22:37:17 -- scripts/common.sh@394 -- # pt= 00:06:03.418 22:37:17 -- scripts/common.sh@395 -- # return 1 00:06:03.418 22:37:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:03.418 1+0 records in 00:06:03.418 1+0 records out 00:06:03.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00394165 s, 266 MB/s 00:06:03.418 22:37:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:03.418 22:37:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:03.418 22:37:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:03.418 22:37:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:03.418 22:37:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:03.418 No valid GPT data, bailing 00:06:03.418 22:37:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:03.418 22:37:17 -- scripts/common.sh@394 -- # pt= 00:06:03.418 22:37:17 -- scripts/common.sh@395 -- # return 1 00:06:03.418 22:37:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:03.418 1+0 records in 00:06:03.418 1+0 records out 00:06:03.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435872 s, 241 MB/s 00:06:03.418 22:37:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:03.418 22:37:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:03.418 22:37:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:03.418 22:37:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:03.418 22:37:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:03.418 No valid GPT data, bailing 00:06:03.418 22:37:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:03.418 22:37:17 -- scripts/common.sh@394 -- # pt= 00:06:03.418 22:37:17 -- scripts/common.sh@395 -- # return 1 00:06:03.418 22:37:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:03.418 1+0 records in 00:06:03.418 1+0 records out 00:06:03.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00337738 s, 310 MB/s 00:06:03.418 22:37:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:03.418 22:37:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:03.418 22:37:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:03.418 22:37:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:03.418 22:37:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:03.418 No valid GPT data, bailing 
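[editor's note] The pattern repeated above for nvme0n1 through nvme1n3 is autotest's pre-clean of the block devices: skip zoned namespaces, probe for a partition table (spdk-gpt.py reports "No valid GPT data, bailing", blkid confirms no PTTYPE), and when the device is not in use, zero its first megabyte so stale metadata cannot leak into the tests. A condensed sketch of that flow as a hypothetical standalone script; the real logic is spread across spdk/autotest.sh and scripts/common.sh (get_zoned_devs, block_in_use):

  shopt -s extglob nullglob
  for dev in /dev/nvme*n!(*p*); do
    zoned="/sys/block/$(basename "$dev")/queue/zoned"
    # leave zoned namespaces alone, as the trace above does
    [[ -e $zoned && $(<"$zoned") != none ]] && continue
    pt=$(blkid -s PTTYPE -o value "$dev")   # empty when no partition table
    if [[ -z $pt ]]; then
      dd if=/dev/zero of="$dev" bs=1M count=1   # clear any stale metadata
    fi
  done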
00:06:03.418 22:37:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:03.418 22:37:17 -- scripts/common.sh@394 -- # pt= 00:06:03.418 22:37:17 -- scripts/common.sh@395 -- # return 1 00:06:03.418 22:37:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:03.418 1+0 records in 00:06:03.418 1+0 records out 00:06:03.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00395036 s, 265 MB/s 00:06:03.418 22:37:17 -- spdk/autotest.sh@105 -- # sync 00:06:03.418 22:37:17 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:03.418 22:37:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:03.418 22:37:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:05.321 22:37:19 -- spdk/autotest.sh@111 -- # uname -s 00:06:05.322 22:37:19 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:05.322 22:37:19 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:05.322 22:37:19 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:05.889 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:05.889 Hugepages 00:06:05.889 node hugesize free / total 00:06:05.889 node0 1048576kB 0 / 0 00:06:05.889 node0 2048kB 0 / 0 00:06:05.889 00:06:05.889 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:05.889 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:06.148 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:06.148 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:06.148 22:37:20 -- spdk/autotest.sh@117 -- # uname -s 00:06:06.148 22:37:20 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:06.148 22:37:20 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:06.148 22:37:20 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:06.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:06.974 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:06.974 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:06.974 22:37:21 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:07.912 22:37:22 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:07.912 22:37:22 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:07.912 22:37:22 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:07.912 22:37:22 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:07.912 22:37:22 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:07.912 22:37:22 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:07.912 22:37:22 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:07.912 22:37:22 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:07.912 22:37:22 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:08.171 22:37:22 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:08.171 22:37:22 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:08.171 22:37:22 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:08.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:08.430 Waiting for block devices as requested 00:06:08.430 0000:00:11.0 (1b36 0010): uio_pci_generic 
-> nvme 00:06:08.689 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:08.689 22:37:23 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:08.689 22:37:23 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:08.689 22:37:23 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:08.689 22:37:23 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:08.689 22:37:23 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:08.689 22:37:23 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:08.689 22:37:23 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:08.689 22:37:23 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:06:08.689 22:37:23 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:06:08.689 22:37:23 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:06:08.689 22:37:23 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:06:08.689 22:37:23 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:08.689 22:37:23 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:08.689 22:37:23 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:08.689 22:37:23 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:08.689 22:37:23 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:08.689 22:37:23 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:08.689 22:37:23 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:06:08.689 22:37:23 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:08.689 22:37:23 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:08.689 22:37:23 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:08.690 22:37:23 -- common/autotest_common.sh@1541 -- # continue 00:06:08.690 22:37:23 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:08.690 22:37:23 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:08.690 22:37:23 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:08.690 22:37:23 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:06:08.690 22:37:23 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:08.690 22:37:23 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:08.690 22:37:23 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:08.690 22:37:23 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:08.690 22:37:23 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:08.690 22:37:23 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:08.690 22:37:23 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:08.690 22:37:23 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:08.690 22:37:23 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:08.690 22:37:23 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:08.690 22:37:23 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:08.690 22:37:23 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:08.690 22:37:23 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:08.690 22:37:23 -- 
common/autotest_common.sh@1538 -- # grep unvmcap 00:06:08.690 22:37:23 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:08.690 22:37:23 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:08.690 22:37:23 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:08.690 22:37:23 -- common/autotest_common.sh@1541 -- # continue 00:06:08.690 22:37:23 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:08.690 22:37:23 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:08.690 22:37:23 -- common/autotest_common.sh@10 -- # set +x 00:06:08.690 22:37:23 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:08.690 22:37:23 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:08.690 22:37:23 -- common/autotest_common.sh@10 -- # set +x 00:06:08.690 22:37:23 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:09.258 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:09.518 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:09.518 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:09.518 22:37:24 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:09.518 22:37:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:09.518 22:37:24 -- common/autotest_common.sh@10 -- # set +x 00:06:09.518 22:37:24 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:09.518 22:37:24 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:09.518 22:37:24 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:09.518 22:37:24 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:09.518 22:37:24 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:09.518 22:37:24 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:09.518 22:37:24 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:09.518 22:37:24 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:09.518 22:37:24 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:09.518 22:37:24 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:09.518 22:37:24 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:09.518 22:37:24 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:09.518 22:37:24 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:09.778 22:37:24 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:09.778 22:37:24 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:09.778 22:37:24 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:09.778 22:37:24 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:09.778 22:37:24 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:09.778 22:37:24 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:09.778 22:37:24 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:09.778 22:37:24 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:09.778 22:37:24 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:09.778 22:37:24 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:09.778 22:37:24 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:09.778 22:37:24 -- common/autotest_common.sh@1570 -- # return 0 00:06:09.778 22:37:24 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:09.778 22:37:24 
-- common/autotest_common.sh@1578 -- # return 0 00:06:09.778 22:37:24 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:09.778 22:37:24 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:09.778 22:37:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:09.778 22:37:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:09.778 22:37:24 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:09.778 22:37:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:09.778 22:37:24 -- common/autotest_common.sh@10 -- # set +x 00:06:09.778 22:37:24 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:09.778 22:37:24 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:09.778 22:37:24 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:09.778 22:37:24 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:09.778 22:37:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:09.778 22:37:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:09.778 22:37:24 -- common/autotest_common.sh@10 -- # set +x 00:06:09.778 ************************************ 00:06:09.778 START TEST env 00:06:09.778 ************************************ 00:06:09.778 22:37:24 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:09.778 * Looking for test storage... 00:06:09.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:09.778 22:37:24 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:09.778 22:37:24 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:09.778 22:37:24 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:09.778 22:37:24 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:09.778 22:37:24 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.778 22:37:24 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.778 22:37:24 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.778 22:37:24 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.778 22:37:24 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.778 22:37:24 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.778 22:37:24 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.778 22:37:24 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.778 22:37:24 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.778 22:37:24 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.778 22:37:24 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.778 22:37:24 env -- scripts/common.sh@344 -- # case "$op" in 00:06:09.778 22:37:24 env -- scripts/common.sh@345 -- # : 1 00:06:09.778 22:37:24 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.778 22:37:24 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.778 22:37:24 env -- scripts/common.sh@365 -- # decimal 1 00:06:10.038 22:37:24 env -- scripts/common.sh@353 -- # local d=1 00:06:10.038 22:37:24 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.038 22:37:24 env -- scripts/common.sh@355 -- # echo 1 00:06:10.038 22:37:24 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.038 22:37:24 env -- scripts/common.sh@366 -- # decimal 2 00:06:10.038 22:37:24 env -- scripts/common.sh@353 -- # local d=2 00:06:10.038 22:37:24 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.038 22:37:24 env -- scripts/common.sh@355 -- # echo 2 00:06:10.038 22:37:24 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.038 22:37:24 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.038 22:37:24 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.038 22:37:24 env -- scripts/common.sh@368 -- # return 0 00:06:10.038 22:37:24 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.038 22:37:24 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:10.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.038 --rc genhtml_branch_coverage=1 00:06:10.038 --rc genhtml_function_coverage=1 00:06:10.038 --rc genhtml_legend=1 00:06:10.038 --rc geninfo_all_blocks=1 00:06:10.038 --rc geninfo_unexecuted_blocks=1 00:06:10.038 00:06:10.038 ' 00:06:10.038 22:37:24 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:10.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.038 --rc genhtml_branch_coverage=1 00:06:10.038 --rc genhtml_function_coverage=1 00:06:10.038 --rc genhtml_legend=1 00:06:10.038 --rc geninfo_all_blocks=1 00:06:10.038 --rc geninfo_unexecuted_blocks=1 00:06:10.038 00:06:10.038 ' 00:06:10.038 22:37:24 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:10.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.038 --rc genhtml_branch_coverage=1 00:06:10.038 --rc genhtml_function_coverage=1 00:06:10.038 --rc genhtml_legend=1 00:06:10.038 --rc geninfo_all_blocks=1 00:06:10.038 --rc geninfo_unexecuted_blocks=1 00:06:10.038 00:06:10.038 ' 00:06:10.038 22:37:24 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:10.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.038 --rc genhtml_branch_coverage=1 00:06:10.038 --rc genhtml_function_coverage=1 00:06:10.038 --rc genhtml_legend=1 00:06:10.038 --rc geninfo_all_blocks=1 00:06:10.038 --rc geninfo_unexecuted_blocks=1 00:06:10.038 00:06:10.038 ' 00:06:10.038 22:37:24 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:10.038 22:37:24 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.038 22:37:24 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.038 22:37:24 env -- common/autotest_common.sh@10 -- # set +x 00:06:10.038 ************************************ 00:06:10.038 START TEST env_memory 00:06:10.038 ************************************ 00:06:10.038 22:37:24 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:10.038 00:06:10.038 00:06:10.038 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.038 http://cunit.sourceforge.net/ 00:06:10.038 00:06:10.038 00:06:10.038 Suite: memory 00:06:10.038 Test: alloc and free memory map ...[2024-12-07 22:37:24.611965] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:10.038 passed 00:06:10.038 Test: mem map translation ...[2024-12-07 22:37:24.643210] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:10.038 [2024-12-07 22:37:24.643396] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:10.038 [2024-12-07 22:37:24.643578] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:10.038 [2024-12-07 22:37:24.643717] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:10.038 passed 00:06:10.038 Test: mem map registration ...[2024-12-07 22:37:24.707702] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:10.038 [2024-12-07 22:37:24.707896] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:10.038 passed 00:06:10.038 Test: mem map adjacent registrations ...passed 00:06:10.038 00:06:10.038 Run Summary: Type Total Ran Passed Failed Inactive 00:06:10.038 suites 1 1 n/a 0 0 00:06:10.038 tests 4 4 4 0 0 00:06:10.038 asserts 152 152 152 0 n/a 00:06:10.038 00:06:10.038 Elapsed time = 0.213 seconds 00:06:10.038 00:06:10.038 real 0m0.231s 00:06:10.038 user 0m0.210s 00:06:10.038 sys 0m0.015s 00:06:10.038 22:37:24 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.038 22:37:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:10.038 ************************************ 00:06:10.038 END TEST env_memory 00:06:10.038 ************************************ 00:06:10.298 22:37:24 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:10.298 22:37:24 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.298 22:37:24 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.298 22:37:24 env -- common/autotest_common.sh@10 -- # set +x 00:06:10.298 ************************************ 00:06:10.298 START TEST env_vtophys 00:06:10.298 ************************************ 00:06:10.298 22:37:24 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:10.298 EAL: lib.eal log level changed from notice to debug 00:06:10.298 EAL: Detected lcore 0 as core 0 on socket 0 00:06:10.298 EAL: Detected lcore 1 as core 0 on socket 0 00:06:10.298 EAL: Detected lcore 2 as core 0 on socket 0 00:06:10.298 EAL: Detected lcore 3 as core 0 on socket 0 00:06:10.298 EAL: Detected lcore 4 as core 0 on socket 0 00:06:10.298 EAL: Detected lcore 5 as core 0 on socket 0 00:06:10.298 EAL: Detected lcore 6 as core 0 on socket 0 00:06:10.298 EAL: Detected lcore 7 as core 0 on socket 0 00:06:10.298 EAL: Detected lcore 8 as core 0 on socket 0 00:06:10.298 EAL: Detected lcore 9 as core 0 on socket 0 00:06:10.298 EAL: Maximum logical cores by configuration: 128 00:06:10.298 EAL: Detected CPU lcores: 10 00:06:10.298 EAL: Detected NUMA nodes: 1 00:06:10.298 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:10.298 EAL: Detected shared linkage of DPDK 00:06:10.298 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:10.298 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:10.298 EAL: Registered [vdev] bus. 00:06:10.298 EAL: bus.vdev log level changed from disabled to notice 00:06:10.298 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:10.298 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:10.298 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:10.298 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:10.298 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:10.298 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:10.298 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:10.298 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:10.298 EAL: No shared files mode enabled, IPC will be disabled 00:06:10.298 EAL: No shared files mode enabled, IPC is disabled 00:06:10.298 EAL: Selected IOVA mode 'PA' 00:06:10.298 EAL: Probing VFIO support... 00:06:10.299 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:10.299 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:10.299 EAL: Ask a virtual area of 0x2e000 bytes 00:06:10.299 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:10.299 EAL: Setting up physically contiguous memory... 00:06:10.299 EAL: Setting maximum number of open files to 524288 00:06:10.299 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:10.299 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:10.299 EAL: Ask a virtual area of 0x61000 bytes 00:06:10.299 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:10.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:10.299 EAL: Ask a virtual area of 0x400000000 bytes 00:06:10.299 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:10.299 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:10.299 EAL: Ask a virtual area of 0x61000 bytes 00:06:10.299 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:10.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:10.299 EAL: Ask a virtual area of 0x400000000 bytes 00:06:10.299 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:10.299 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:10.299 EAL: Ask a virtual area of 0x61000 bytes 00:06:10.299 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:10.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:10.299 EAL: Ask a virtual area of 0x400000000 bytes 00:06:10.299 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:10.299 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:10.299 EAL: Ask a virtual area of 0x61000 bytes 00:06:10.299 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:10.299 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:10.299 EAL: Ask a virtual area of 0x400000000 bytes 00:06:10.299 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:10.299 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:06:10.299 EAL: Hugepages will be freed exactly as allocated. 00:06:10.299 EAL: No shared files mode enabled, IPC is disabled 00:06:10.299 EAL: No shared files mode enabled, IPC is disabled 00:06:10.299 EAL: TSC frequency is ~2200000 KHz 00:06:10.299 EAL: Main lcore 0 is ready (tid=7ff8d8a1ca00;cpuset=[0]) 00:06:10.299 EAL: Trying to obtain current memory policy. 00:06:10.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.299 EAL: Restoring previous memory policy: 0 00:06:10.299 EAL: request: mp_malloc_sync 00:06:10.299 EAL: No shared files mode enabled, IPC is disabled 00:06:10.299 EAL: Heap on socket 0 was expanded by 2MB 00:06:10.299 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:10.299 EAL: No shared files mode enabled, IPC is disabled 00:06:10.299 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:10.299 EAL: Mem event callback 'spdk:(nil)' registered 00:06:10.299 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:10.299 00:06:10.299 00:06:10.299 CUnit - A unit testing framework for C - Version 2.1-3 00:06:10.299 http://cunit.sourceforge.net/ 00:06:10.299 00:06:10.299 00:06:10.299 Suite: components_suite 00:06:10.299 Test: vtophys_malloc_test ...passed 00:06:10.299 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:10.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.299 EAL: Restoring previous memory policy: 4 00:06:10.299 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.299 EAL: request: mp_malloc_sync 00:06:10.299 EAL: No shared files mode enabled, IPC is disabled 00:06:10.299 EAL: Heap on socket 0 was expanded by 4MB 00:06:10.299 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.299 EAL: request: mp_malloc_sync 00:06:10.299 EAL: No shared files mode enabled, IPC is disabled 00:06:10.299 EAL: Heap on socket 0 was shrunk by 4MB 00:06:10.299 EAL: Trying to obtain current memory policy. 00:06:10.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.299 EAL: Restoring previous memory policy: 4 00:06:10.299 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.299 EAL: request: mp_malloc_sync 00:06:10.299 EAL: No shared files mode enabled, IPC is disabled 00:06:10.299 EAL: Heap on socket 0 was expanded by 6MB 00:06:10.299 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.299 EAL: request: mp_malloc_sync 00:06:10.299 EAL: No shared files mode enabled, IPC is disabled 00:06:10.299 EAL: Heap on socket 0 was shrunk by 6MB 00:06:10.299 EAL: Trying to obtain current memory policy. 00:06:10.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.299 EAL: Restoring previous memory policy: 4 00:06:10.299 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.299 EAL: request: mp_malloc_sync 00:06:10.299 EAL: No shared files mode enabled, IPC is disabled 00:06:10.299 EAL: Heap on socket 0 was expanded by 10MB 00:06:10.299 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.299 EAL: request: mp_malloc_sync 00:06:10.299 EAL: No shared files mode enabled, IPC is disabled 00:06:10.299 EAL: Heap on socket 0 was shrunk by 10MB 00:06:10.299 EAL: Trying to obtain current memory policy. 
00:06:10.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.299 EAL: Restoring previous memory policy: 4 00:06:10.299 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.299 EAL: request: mp_malloc_sync 00:06:10.299 EAL: No shared files mode enabled, IPC is disabled 00:06:10.299 EAL: Heap on socket 0 was expanded by 18MB 00:06:10.299 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.299 EAL: request: mp_malloc_sync 00:06:10.299 EAL: No shared files mode enabled, IPC is disabled 00:06:10.299 EAL: Heap on socket 0 was shrunk by 18MB 00:06:10.299 EAL: Trying to obtain current memory policy. 00:06:10.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.299 EAL: Restoring previous memory policy: 4 00:06:10.299 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.299 EAL: request: mp_malloc_sync 00:06:10.299 EAL: No shared files mode enabled, IPC is disabled 00:06:10.299 EAL: Heap on socket 0 was expanded by 34MB 00:06:10.299 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.299 EAL: request: mp_malloc_sync 00:06:10.299 EAL: No shared files mode enabled, IPC is disabled 00:06:10.299 EAL: Heap on socket 0 was shrunk by 34MB 00:06:10.299 EAL: Trying to obtain current memory policy. 00:06:10.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.299 EAL: Restoring previous memory policy: 4 00:06:10.299 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.299 EAL: request: mp_malloc_sync 00:06:10.299 EAL: No shared files mode enabled, IPC is disabled 00:06:10.299 EAL: Heap on socket 0 was expanded by 66MB 00:06:10.299 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.299 EAL: request: mp_malloc_sync 00:06:10.299 EAL: No shared files mode enabled, IPC is disabled 00:06:10.299 EAL: Heap on socket 0 was shrunk by 66MB 00:06:10.299 EAL: Trying to obtain current memory policy. 00:06:10.299 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.557 EAL: Restoring previous memory policy: 4 00:06:10.557 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.557 EAL: request: mp_malloc_sync 00:06:10.557 EAL: No shared files mode enabled, IPC is disabled 00:06:10.557 EAL: Heap on socket 0 was expanded by 130MB 00:06:10.557 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.557 EAL: request: mp_malloc_sync 00:06:10.557 EAL: No shared files mode enabled, IPC is disabled 00:06:10.557 EAL: Heap on socket 0 was shrunk by 130MB 00:06:10.557 EAL: Trying to obtain current memory policy. 00:06:10.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.557 EAL: Restoring previous memory policy: 4 00:06:10.557 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.557 EAL: request: mp_malloc_sync 00:06:10.557 EAL: No shared files mode enabled, IPC is disabled 00:06:10.557 EAL: Heap on socket 0 was expanded by 258MB 00:06:10.557 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.557 EAL: request: mp_malloc_sync 00:06:10.557 EAL: No shared files mode enabled, IPC is disabled 00:06:10.557 EAL: Heap on socket 0 was shrunk by 258MB 00:06:10.557 EAL: Trying to obtain current memory policy. 
00:06:10.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.557 EAL: Restoring previous memory policy: 4 00:06:10.557 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.557 EAL: request: mp_malloc_sync 00:06:10.557 EAL: No shared files mode enabled, IPC is disabled 00:06:10.557 EAL: Heap on socket 0 was expanded by 514MB 00:06:10.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.816 EAL: request: mp_malloc_sync 00:06:10.816 EAL: No shared files mode enabled, IPC is disabled 00:06:10.816 EAL: Heap on socket 0 was shrunk by 514MB 00:06:10.816 EAL: Trying to obtain current memory policy. 00:06:10.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:10.816 EAL: Restoring previous memory policy: 4 00:06:10.816 EAL: Calling mem event callback 'spdk:(nil)' 00:06:10.816 EAL: request: mp_malloc_sync 00:06:10.816 EAL: No shared files mode enabled, IPC is disabled 00:06:10.816 EAL: Heap on socket 0 was expanded by 1026MB 00:06:11.074 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.074 passed 00:06:11.074 00:06:11.074 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.074 suites 1 1 n/a 0 0 00:06:11.074 tests 2 2 2 0 0 00:06:11.074 asserts 5974 5974 5974 0 n/a 00:06:11.074 00:06:11.074 Elapsed time = 0.703 seconds 00:06:11.074 EAL: request: mp_malloc_sync 00:06:11.074 EAL: No shared files mode enabled, IPC is disabled 00:06:11.074 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:11.074 EAL: Calling mem event callback 'spdk:(nil)' 00:06:11.074 EAL: request: mp_malloc_sync 00:06:11.074 EAL: No shared files mode enabled, IPC is disabled 00:06:11.074 EAL: Heap on socket 0 was shrunk by 2MB 00:06:11.074 EAL: No shared files mode enabled, IPC is disabled 00:06:11.074 EAL: No shared files mode enabled, IPC is disabled 00:06:11.074 EAL: No shared files mode enabled, IPC is disabled 00:06:11.074 00:06:11.074 real 0m0.900s 00:06:11.074 user 0m0.454s 00:06:11.074 sys 0m0.311s 00:06:11.074 22:37:25 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.074 22:37:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:11.074 ************************************ 00:06:11.074 END TEST env_vtophys 00:06:11.074 ************************************ 00:06:11.074 22:37:25 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:11.074 22:37:25 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.074 22:37:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.074 22:37:25 env -- common/autotest_common.sh@10 -- # set +x 00:06:11.074 ************************************ 00:06:11.074 START TEST env_pci 00:06:11.074 ************************************ 00:06:11.074 22:37:25 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:11.074 00:06:11.074 00:06:11.074 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.074 http://cunit.sourceforge.net/ 00:06:11.074 00:06:11.074 00:06:11.074 Suite: pci 00:06:11.074 Test: pci_hook ...[2024-12-07 22:37:25.814112] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68829 has claimed it 00:06:11.074 passed 00:06:11.074 00:06:11.074 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.074 suites 1 1 n/a 0 0 00:06:11.074 tests 1 1 1 0 0 00:06:11.074 asserts 25 25 25 0 n/a 00:06:11.074 00:06:11.074 Elapsed time = 0.002 seconds 00:06:11.074 EAL: Cannot find 
device (10000:00:01.0) 00:06:11.074 EAL: Failed to attach device on primary process 00:06:11.074 00:06:11.074 real 0m0.019s 00:06:11.074 user 0m0.009s 00:06:11.074 sys 0m0.010s 00:06:11.074 22:37:25 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.074 22:37:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:11.074 ************************************ 00:06:11.074 END TEST env_pci 00:06:11.074 ************************************ 00:06:11.333 22:37:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:11.333 22:37:25 env -- env/env.sh@15 -- # uname 00:06:11.333 22:37:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:11.333 22:37:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:11.333 22:37:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:11.333 22:37:25 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:11.333 22:37:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.333 22:37:25 env -- common/autotest_common.sh@10 -- # set +x 00:06:11.333 ************************************ 00:06:11.333 START TEST env_dpdk_post_init 00:06:11.333 ************************************ 00:06:11.333 22:37:25 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:11.333 EAL: Detected CPU lcores: 10 00:06:11.333 EAL: Detected NUMA nodes: 1 00:06:11.333 EAL: Detected shared linkage of DPDK 00:06:11.333 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:11.333 EAL: Selected IOVA mode 'PA' 00:06:11.333 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:11.333 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:11.333 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:11.333 Starting DPDK initialization... 00:06:11.333 Starting SPDK post initialization... 00:06:11.333 SPDK NVMe probe 00:06:11.333 Attaching to 0000:00:10.0 00:06:11.333 Attaching to 0000:00:11.0 00:06:11.333 Attached to 0000:00:10.0 00:06:11.333 Attached to 0000:00:11.0 00:06:11.333 Cleaning up... 
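[editor's note] env_dpdk_post_init above was launched with '-c 0x1 --base-virtaddr=0x200000000000', the arguments env.sh assembled in the traced env.sh@14/@22 steps: a one-core mask plus, on Linux, a fixed base virtual address that keeps DPDK's memory mappings at a predictable location across processes. A sketch of that assembly; the binary path is hypothetical:

  argv='-c 0x1 '                           # single-core mask, as in env.sh@14
  if [[ $(uname -s) == Linux ]]; then
    argv+=--base-virtaddr=0x200000000000   # pin DPDK's VA base (env.sh@22)
  fi
  # word splitting of $argv is intentional here
  "$testdir/env_dpdk_post_init/env_dpdk_post_init" $argv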
00:06:11.333 00:06:11.333 real 0m0.175s 00:06:11.333 user 0m0.045s 00:06:11.333 sys 0m0.030s 00:06:11.333 22:37:26 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.333 22:37:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:11.333 ************************************ 00:06:11.333 END TEST env_dpdk_post_init 00:06:11.333 ************************************ 00:06:11.333 22:37:26 env -- env/env.sh@26 -- # uname 00:06:11.591 22:37:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:11.591 22:37:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:11.591 22:37:26 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.591 22:37:26 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.591 22:37:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:11.591 ************************************ 00:06:11.591 START TEST env_mem_callbacks 00:06:11.591 ************************************ 00:06:11.591 22:37:26 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:11.591 EAL: Detected CPU lcores: 10 00:06:11.591 EAL: Detected NUMA nodes: 1 00:06:11.591 EAL: Detected shared linkage of DPDK 00:06:11.591 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:11.591 EAL: Selected IOVA mode 'PA' 00:06:11.591 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:11.591 00:06:11.591 00:06:11.591 CUnit - A unit testing framework for C - Version 2.1-3 00:06:11.591 http://cunit.sourceforge.net/ 00:06:11.591 00:06:11.591 00:06:11.591 Suite: memory 00:06:11.591 Test: test ... 00:06:11.591 register 0x200000200000 2097152 00:06:11.591 malloc 3145728 00:06:11.591 register 0x200000400000 4194304 00:06:11.591 buf 0x200000500000 len 3145728 PASSED 00:06:11.591 malloc 64 00:06:11.591 buf 0x2000004fff40 len 64 PASSED 00:06:11.591 malloc 4194304 00:06:11.591 register 0x200000800000 6291456 00:06:11.591 buf 0x200000a00000 len 4194304 PASSED 00:06:11.591 free 0x200000500000 3145728 00:06:11.591 free 0x2000004fff40 64 00:06:11.591 unregister 0x200000400000 4194304 PASSED 00:06:11.591 free 0x200000a00000 4194304 00:06:11.591 unregister 0x200000800000 6291456 PASSED 00:06:11.591 malloc 8388608 00:06:11.591 register 0x200000400000 10485760 00:06:11.591 buf 0x200000600000 len 8388608 PASSED 00:06:11.591 free 0x200000600000 8388608 00:06:11.591 unregister 0x200000400000 10485760 PASSED 00:06:11.591 passed 00:06:11.591 00:06:11.591 Run Summary: Type Total Ran Passed Failed Inactive 00:06:11.591 suites 1 1 n/a 0 0 00:06:11.591 tests 1 1 1 0 0 00:06:11.591 asserts 15 15 15 0 n/a 00:06:11.591 00:06:11.591 Elapsed time = 0.007 seconds 00:06:11.591 00:06:11.592 real 0m0.136s 00:06:11.592 user 0m0.015s 00:06:11.592 sys 0m0.021s 00:06:11.592 22:37:26 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.592 22:37:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:11.592 ************************************ 00:06:11.592 END TEST env_mem_callbacks 00:06:11.592 ************************************ 00:06:11.592 00:06:11.592 real 0m1.925s 00:06:11.592 user 0m0.941s 00:06:11.592 sys 0m0.633s 00:06:11.592 22:37:26 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.592 22:37:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:11.592 ************************************ 00:06:11.592 END TEST env 00:06:11.592 
************************************ 00:06:11.592 22:37:26 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:11.592 22:37:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.592 22:37:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.592 22:37:26 -- common/autotest_common.sh@10 -- # set +x 00:06:11.592 ************************************ 00:06:11.592 START TEST rpc 00:06:11.592 ************************************ 00:06:11.592 22:37:26 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:11.850 * Looking for test storage... 00:06:11.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:11.850 22:37:26 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:11.850 22:37:26 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:11.850 22:37:26 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:11.850 22:37:26 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:11.850 22:37:26 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:11.850 22:37:26 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:11.850 22:37:26 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:11.850 22:37:26 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.850 22:37:26 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:11.850 22:37:26 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:11.850 22:37:26 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:11.850 22:37:26 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:11.850 22:37:26 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:11.850 22:37:26 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:11.850 22:37:26 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:11.850 22:37:26 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:11.850 22:37:26 rpc -- scripts/common.sh@345 -- # : 1 00:06:11.850 22:37:26 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:11.850 22:37:26 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.850 22:37:26 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:11.850 22:37:26 rpc -- scripts/common.sh@353 -- # local d=1 00:06:11.850 22:37:26 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.850 22:37:26 rpc -- scripts/common.sh@355 -- # echo 1 00:06:11.850 22:37:26 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:11.850 22:37:26 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:11.850 22:37:26 rpc -- scripts/common.sh@353 -- # local d=2 00:06:11.850 22:37:26 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.850 22:37:26 rpc -- scripts/common.sh@355 -- # echo 2 00:06:11.850 22:37:26 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:11.850 22:37:26 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:11.850 22:37:26 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:11.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
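The waitforlisten step above blocks until the freshly started spdk_tgt answers on its UNIX-domain RPC socket. A minimal sketch of the same idea, assuming the stock scripts/rpc.py client shipped with SPDK; the polling loop is illustrative and not the autotest implementation:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  for i in $(seq 1 100); do                        # poll until the RPC server answers on /var/tmp/spdk.sock
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version \
          >/dev/null 2>&1 && break
      sleep 0.1
  done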
00:06:11.850 22:37:26 rpc -- scripts/common.sh@368 -- # return 0 00:06:11.850 22:37:26 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.850 22:37:26 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:11.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.851 --rc genhtml_branch_coverage=1 00:06:11.851 --rc genhtml_function_coverage=1 00:06:11.851 --rc genhtml_legend=1 00:06:11.851 --rc geninfo_all_blocks=1 00:06:11.851 --rc geninfo_unexecuted_blocks=1 00:06:11.851 00:06:11.851 ' 00:06:11.851 22:37:26 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:11.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.851 --rc genhtml_branch_coverage=1 00:06:11.851 --rc genhtml_function_coverage=1 00:06:11.851 --rc genhtml_legend=1 00:06:11.851 --rc geninfo_all_blocks=1 00:06:11.851 --rc geninfo_unexecuted_blocks=1 00:06:11.851 00:06:11.851 ' 00:06:11.851 22:37:26 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:11.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.851 --rc genhtml_branch_coverage=1 00:06:11.851 --rc genhtml_function_coverage=1 00:06:11.851 --rc genhtml_legend=1 00:06:11.851 --rc geninfo_all_blocks=1 00:06:11.851 --rc geninfo_unexecuted_blocks=1 00:06:11.851 00:06:11.851 ' 00:06:11.851 22:37:26 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:11.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.851 --rc genhtml_branch_coverage=1 00:06:11.851 --rc genhtml_function_coverage=1 00:06:11.851 --rc genhtml_legend=1 00:06:11.851 --rc geninfo_all_blocks=1 00:06:11.851 --rc geninfo_unexecuted_blocks=1 00:06:11.851 00:06:11.851 ' 00:06:11.851 22:37:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68952 00:06:11.851 22:37:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:11.851 22:37:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68952 00:06:11.851 22:37:26 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:11.851 22:37:26 rpc -- common/autotest_common.sh@831 -- # '[' -z 68952 ']' 00:06:11.851 22:37:26 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.851 22:37:26 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.851 22:37:26 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.851 22:37:26 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.851 22:37:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.851 [2024-12-07 22:37:26.583181] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:11.851 [2024-12-07 22:37:26.583290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68952 ] 00:06:12.110 [2024-12-07 22:37:26.719365] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.110 [2024-12-07 22:37:26.752913] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:12.110 [2024-12-07 22:37:26.752976] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68952' to capture a snapshot of events at runtime. 
00:06:12.110 [2024-12-07 22:37:26.752986] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:12.110 [2024-12-07 22:37:26.752993] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:12.110 [2024-12-07 22:37:26.752999] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68952 for offline analysis/debug. 00:06:12.110 [2024-12-07 22:37:26.753027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.110 [2024-12-07 22:37:26.788560] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.368 22:37:26 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.368 22:37:26 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:12.368 22:37:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:12.368 22:37:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:12.368 22:37:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:12.368 22:37:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:12.368 22:37:26 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.368 22:37:26 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.368 22:37:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.368 ************************************ 00:06:12.368 START TEST rpc_integrity 00:06:12.368 ************************************ 00:06:12.368 22:37:26 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:12.368 22:37:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:12.368 22:37:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.368 22:37:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.368 22:37:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.368 22:37:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:12.368 22:37:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:12.368 22:37:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:12.368 22:37:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:12.368 22:37:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.368 22:37:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.368 22:37:26 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.368 22:37:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:12.368 22:37:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:12.368 22:37:26 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.368 22:37:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.368 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.368 22:37:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:12.368 { 00:06:12.368 "name": "Malloc0", 00:06:12.368 "aliases": [ 00:06:12.368 "4dad1a5d-ae26-4afc-a132-cccf41e73a97" 00:06:12.368 ], 00:06:12.368 "product_name": "Malloc 
disk", 00:06:12.368 "block_size": 512, 00:06:12.368 "num_blocks": 16384, 00:06:12.368 "uuid": "4dad1a5d-ae26-4afc-a132-cccf41e73a97", 00:06:12.368 "assigned_rate_limits": { 00:06:12.368 "rw_ios_per_sec": 0, 00:06:12.368 "rw_mbytes_per_sec": 0, 00:06:12.368 "r_mbytes_per_sec": 0, 00:06:12.368 "w_mbytes_per_sec": 0 00:06:12.368 }, 00:06:12.368 "claimed": false, 00:06:12.368 "zoned": false, 00:06:12.368 "supported_io_types": { 00:06:12.368 "read": true, 00:06:12.368 "write": true, 00:06:12.368 "unmap": true, 00:06:12.368 "flush": true, 00:06:12.368 "reset": true, 00:06:12.368 "nvme_admin": false, 00:06:12.368 "nvme_io": false, 00:06:12.368 "nvme_io_md": false, 00:06:12.368 "write_zeroes": true, 00:06:12.368 "zcopy": true, 00:06:12.368 "get_zone_info": false, 00:06:12.368 "zone_management": false, 00:06:12.368 "zone_append": false, 00:06:12.368 "compare": false, 00:06:12.368 "compare_and_write": false, 00:06:12.368 "abort": true, 00:06:12.368 "seek_hole": false, 00:06:12.368 "seek_data": false, 00:06:12.368 "copy": true, 00:06:12.368 "nvme_iov_md": false 00:06:12.368 }, 00:06:12.368 "memory_domains": [ 00:06:12.368 { 00:06:12.368 "dma_device_id": "system", 00:06:12.368 "dma_device_type": 1 00:06:12.368 }, 00:06:12.368 { 00:06:12.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:12.369 "dma_device_type": 2 00:06:12.369 } 00:06:12.369 ], 00:06:12.369 "driver_specific": {} 00:06:12.369 } 00:06:12.369 ]' 00:06:12.369 22:37:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:12.369 22:37:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:12.369 22:37:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:12.369 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.369 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.369 [2024-12-07 22:37:27.066699] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:12.369 [2024-12-07 22:37:27.066751] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:12.369 [2024-12-07 22:37:27.066767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12d9500 00:06:12.369 [2024-12-07 22:37:27.066775] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:12.369 [2024-12-07 22:37:27.068274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:12.369 [2024-12-07 22:37:27.068318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:12.369 Passthru0 00:06:12.369 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.369 22:37:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:12.369 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.369 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.369 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.369 22:37:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:12.369 { 00:06:12.369 "name": "Malloc0", 00:06:12.369 "aliases": [ 00:06:12.369 "4dad1a5d-ae26-4afc-a132-cccf41e73a97" 00:06:12.369 ], 00:06:12.369 "product_name": "Malloc disk", 00:06:12.369 "block_size": 512, 00:06:12.369 "num_blocks": 16384, 00:06:12.369 "uuid": "4dad1a5d-ae26-4afc-a132-cccf41e73a97", 00:06:12.369 "assigned_rate_limits": { 00:06:12.369 "rw_ios_per_sec": 0, 00:06:12.369 "rw_mbytes_per_sec": 0, 
00:06:12.369 "r_mbytes_per_sec": 0, 00:06:12.369 "w_mbytes_per_sec": 0 00:06:12.369 }, 00:06:12.369 "claimed": true, 00:06:12.369 "claim_type": "exclusive_write", 00:06:12.369 "zoned": false, 00:06:12.369 "supported_io_types": { 00:06:12.369 "read": true, 00:06:12.369 "write": true, 00:06:12.369 "unmap": true, 00:06:12.369 "flush": true, 00:06:12.369 "reset": true, 00:06:12.369 "nvme_admin": false, 00:06:12.369 "nvme_io": false, 00:06:12.369 "nvme_io_md": false, 00:06:12.369 "write_zeroes": true, 00:06:12.369 "zcopy": true, 00:06:12.369 "get_zone_info": false, 00:06:12.369 "zone_management": false, 00:06:12.369 "zone_append": false, 00:06:12.369 "compare": false, 00:06:12.369 "compare_and_write": false, 00:06:12.369 "abort": true, 00:06:12.369 "seek_hole": false, 00:06:12.369 "seek_data": false, 00:06:12.369 "copy": true, 00:06:12.369 "nvme_iov_md": false 00:06:12.369 }, 00:06:12.369 "memory_domains": [ 00:06:12.369 { 00:06:12.369 "dma_device_id": "system", 00:06:12.369 "dma_device_type": 1 00:06:12.369 }, 00:06:12.369 { 00:06:12.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:12.369 "dma_device_type": 2 00:06:12.369 } 00:06:12.369 ], 00:06:12.369 "driver_specific": {} 00:06:12.369 }, 00:06:12.369 { 00:06:12.369 "name": "Passthru0", 00:06:12.369 "aliases": [ 00:06:12.369 "9b348dd1-db18-5750-830e-bd3506401d01" 00:06:12.369 ], 00:06:12.369 "product_name": "passthru", 00:06:12.369 "block_size": 512, 00:06:12.369 "num_blocks": 16384, 00:06:12.369 "uuid": "9b348dd1-db18-5750-830e-bd3506401d01", 00:06:12.369 "assigned_rate_limits": { 00:06:12.369 "rw_ios_per_sec": 0, 00:06:12.369 "rw_mbytes_per_sec": 0, 00:06:12.369 "r_mbytes_per_sec": 0, 00:06:12.369 "w_mbytes_per_sec": 0 00:06:12.369 }, 00:06:12.369 "claimed": false, 00:06:12.369 "zoned": false, 00:06:12.369 "supported_io_types": { 00:06:12.369 "read": true, 00:06:12.369 "write": true, 00:06:12.369 "unmap": true, 00:06:12.369 "flush": true, 00:06:12.369 "reset": true, 00:06:12.369 "nvme_admin": false, 00:06:12.369 "nvme_io": false, 00:06:12.369 "nvme_io_md": false, 00:06:12.369 "write_zeroes": true, 00:06:12.369 "zcopy": true, 00:06:12.369 "get_zone_info": false, 00:06:12.369 "zone_management": false, 00:06:12.369 "zone_append": false, 00:06:12.369 "compare": false, 00:06:12.369 "compare_and_write": false, 00:06:12.369 "abort": true, 00:06:12.369 "seek_hole": false, 00:06:12.369 "seek_data": false, 00:06:12.369 "copy": true, 00:06:12.369 "nvme_iov_md": false 00:06:12.369 }, 00:06:12.369 "memory_domains": [ 00:06:12.369 { 00:06:12.369 "dma_device_id": "system", 00:06:12.369 "dma_device_type": 1 00:06:12.369 }, 00:06:12.369 { 00:06:12.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:12.369 "dma_device_type": 2 00:06:12.369 } 00:06:12.369 ], 00:06:12.369 "driver_specific": { 00:06:12.369 "passthru": { 00:06:12.369 "name": "Passthru0", 00:06:12.369 "base_bdev_name": "Malloc0" 00:06:12.369 } 00:06:12.369 } 00:06:12.369 } 00:06:12.369 ]' 00:06:12.369 22:37:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:12.628 22:37:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:12.628 22:37:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:12.628 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.628 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.628 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.628 22:37:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 
00:06:12.628 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.628 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.628 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.628 22:37:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:12.628 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.628 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.628 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.628 22:37:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:12.628 22:37:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:12.628 ************************************ 00:06:12.628 END TEST rpc_integrity 00:06:12.628 ************************************ 00:06:12.628 22:37:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:12.628 00:06:12.628 real 0m0.330s 00:06:12.628 user 0m0.219s 00:06:12.628 sys 0m0.043s 00:06:12.628 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.628 22:37:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:12.628 22:37:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:12.628 22:37:27 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.628 22:37:27 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.628 22:37:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.628 ************************************ 00:06:12.628 START TEST rpc_plugins 00:06:12.628 ************************************ 00:06:12.628 22:37:27 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:12.628 22:37:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:12.628 22:37:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.628 22:37:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:12.628 22:37:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.628 22:37:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:12.628 22:37:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:12.628 22:37:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.628 22:37:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:12.628 22:37:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.628 22:37:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:12.628 { 00:06:12.628 "name": "Malloc1", 00:06:12.628 "aliases": [ 00:06:12.628 "97c6380e-7e2b-45f3-b9f6-82345325e8d9" 00:06:12.628 ], 00:06:12.628 "product_name": "Malloc disk", 00:06:12.628 "block_size": 4096, 00:06:12.628 "num_blocks": 256, 00:06:12.628 "uuid": "97c6380e-7e2b-45f3-b9f6-82345325e8d9", 00:06:12.628 "assigned_rate_limits": { 00:06:12.628 "rw_ios_per_sec": 0, 00:06:12.628 "rw_mbytes_per_sec": 0, 00:06:12.628 "r_mbytes_per_sec": 0, 00:06:12.628 "w_mbytes_per_sec": 0 00:06:12.628 }, 00:06:12.628 "claimed": false, 00:06:12.628 "zoned": false, 00:06:12.628 "supported_io_types": { 00:06:12.628 "read": true, 00:06:12.628 "write": true, 00:06:12.628 "unmap": true, 00:06:12.628 "flush": true, 00:06:12.628 "reset": true, 00:06:12.628 "nvme_admin": false, 00:06:12.628 "nvme_io": false, 00:06:12.628 "nvme_io_md": false, 00:06:12.628 "write_zeroes": true, 00:06:12.628 "zcopy": true, 
00:06:12.628 "get_zone_info": false, 00:06:12.628 "zone_management": false, 00:06:12.628 "zone_append": false, 00:06:12.628 "compare": false, 00:06:12.628 "compare_and_write": false, 00:06:12.628 "abort": true, 00:06:12.628 "seek_hole": false, 00:06:12.628 "seek_data": false, 00:06:12.628 "copy": true, 00:06:12.628 "nvme_iov_md": false 00:06:12.628 }, 00:06:12.628 "memory_domains": [ 00:06:12.628 { 00:06:12.628 "dma_device_id": "system", 00:06:12.628 "dma_device_type": 1 00:06:12.628 }, 00:06:12.628 { 00:06:12.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:12.629 "dma_device_type": 2 00:06:12.629 } 00:06:12.629 ], 00:06:12.629 "driver_specific": {} 00:06:12.629 } 00:06:12.629 ]' 00:06:12.629 22:37:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:12.629 22:37:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:12.629 22:37:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:12.629 22:37:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.629 22:37:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:12.888 22:37:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.888 22:37:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:12.888 22:37:27 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.888 22:37:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:12.888 22:37:27 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.888 22:37:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:12.888 22:37:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:12.888 ************************************ 00:06:12.888 END TEST rpc_plugins 00:06:12.888 ************************************ 00:06:12.888 22:37:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:12.888 00:06:12.888 real 0m0.169s 00:06:12.888 user 0m0.118s 00:06:12.888 sys 0m0.016s 00:06:12.888 22:37:27 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:12.888 22:37:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:12.888 22:37:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:12.888 22:37:27 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:12.888 22:37:27 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:12.888 22:37:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.888 ************************************ 00:06:12.888 START TEST rpc_trace_cmd_test 00:06:12.888 ************************************ 00:06:12.888 22:37:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:12.888 22:37:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:12.888 22:37:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:12.888 22:37:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.888 22:37:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:12.888 22:37:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.888 22:37:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:12.888 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68952", 00:06:12.888 "tpoint_group_mask": "0x8", 00:06:12.888 "iscsi_conn": { 00:06:12.888 "mask": "0x2", 00:06:12.888 "tpoint_mask": "0x0" 00:06:12.888 }, 00:06:12.888 "scsi": { 00:06:12.888 "mask": "0x4", 00:06:12.888 
"tpoint_mask": "0x0" 00:06:12.888 }, 00:06:12.888 "bdev": { 00:06:12.888 "mask": "0x8", 00:06:12.888 "tpoint_mask": "0xffffffffffffffff" 00:06:12.888 }, 00:06:12.888 "nvmf_rdma": { 00:06:12.888 "mask": "0x10", 00:06:12.888 "tpoint_mask": "0x0" 00:06:12.888 }, 00:06:12.888 "nvmf_tcp": { 00:06:12.888 "mask": "0x20", 00:06:12.888 "tpoint_mask": "0x0" 00:06:12.888 }, 00:06:12.888 "ftl": { 00:06:12.888 "mask": "0x40", 00:06:12.888 "tpoint_mask": "0x0" 00:06:12.888 }, 00:06:12.888 "blobfs": { 00:06:12.888 "mask": "0x80", 00:06:12.888 "tpoint_mask": "0x0" 00:06:12.888 }, 00:06:12.888 "dsa": { 00:06:12.888 "mask": "0x200", 00:06:12.888 "tpoint_mask": "0x0" 00:06:12.888 }, 00:06:12.888 "thread": { 00:06:12.888 "mask": "0x400", 00:06:12.888 "tpoint_mask": "0x0" 00:06:12.888 }, 00:06:12.888 "nvme_pcie": { 00:06:12.888 "mask": "0x800", 00:06:12.888 "tpoint_mask": "0x0" 00:06:12.888 }, 00:06:12.888 "iaa": { 00:06:12.888 "mask": "0x1000", 00:06:12.888 "tpoint_mask": "0x0" 00:06:12.888 }, 00:06:12.888 "nvme_tcp": { 00:06:12.888 "mask": "0x2000", 00:06:12.888 "tpoint_mask": "0x0" 00:06:12.888 }, 00:06:12.888 "bdev_nvme": { 00:06:12.888 "mask": "0x4000", 00:06:12.888 "tpoint_mask": "0x0" 00:06:12.888 }, 00:06:12.888 "sock": { 00:06:12.888 "mask": "0x8000", 00:06:12.888 "tpoint_mask": "0x0" 00:06:12.888 }, 00:06:12.888 "blob": { 00:06:12.888 "mask": "0x10000", 00:06:12.888 "tpoint_mask": "0x0" 00:06:12.888 }, 00:06:12.888 "bdev_raid": { 00:06:12.888 "mask": "0x20000", 00:06:12.888 "tpoint_mask": "0x0" 00:06:12.888 } 00:06:12.888 }' 00:06:12.888 22:37:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:12.888 22:37:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:12.888 22:37:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:12.888 22:37:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:12.888 22:37:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:13.147 22:37:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:13.147 22:37:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:13.147 22:37:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:13.147 22:37:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:13.147 ************************************ 00:06:13.147 END TEST rpc_trace_cmd_test 00:06:13.147 ************************************ 00:06:13.147 22:37:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:13.147 00:06:13.147 real 0m0.286s 00:06:13.147 user 0m0.247s 00:06:13.147 sys 0m0.028s 00:06:13.147 22:37:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.147 22:37:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:13.147 22:37:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:13.147 22:37:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:13.147 22:37:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:13.147 22:37:27 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.147 22:37:27 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.147 22:37:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.147 ************************************ 00:06:13.147 START TEST rpc_daemon_integrity 00:06:13.147 ************************************ 00:06:13.147 22:37:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 
00:06:13.147 22:37:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:13.147 22:37:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.147 22:37:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.147 22:37:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.147 22:37:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:13.147 22:37:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:13.406 22:37:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:13.406 22:37:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:13.406 22:37:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.406 22:37:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.406 22:37:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.406 22:37:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:13.406 22:37:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:13.406 22:37:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.406 22:37:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.406 22:37:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.406 22:37:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:13.406 { 00:06:13.406 "name": "Malloc2", 00:06:13.406 "aliases": [ 00:06:13.406 "6ce254fb-fc57-4d29-983b-76eb891f4663" 00:06:13.406 ], 00:06:13.406 "product_name": "Malloc disk", 00:06:13.406 "block_size": 512, 00:06:13.406 "num_blocks": 16384, 00:06:13.406 "uuid": "6ce254fb-fc57-4d29-983b-76eb891f4663", 00:06:13.406 "assigned_rate_limits": { 00:06:13.406 "rw_ios_per_sec": 0, 00:06:13.406 "rw_mbytes_per_sec": 0, 00:06:13.406 "r_mbytes_per_sec": 0, 00:06:13.406 "w_mbytes_per_sec": 0 00:06:13.406 }, 00:06:13.406 "claimed": false, 00:06:13.406 "zoned": false, 00:06:13.406 "supported_io_types": { 00:06:13.406 "read": true, 00:06:13.406 "write": true, 00:06:13.406 "unmap": true, 00:06:13.406 "flush": true, 00:06:13.406 "reset": true, 00:06:13.406 "nvme_admin": false, 00:06:13.406 "nvme_io": false, 00:06:13.406 "nvme_io_md": false, 00:06:13.406 "write_zeroes": true, 00:06:13.406 "zcopy": true, 00:06:13.406 "get_zone_info": false, 00:06:13.406 "zone_management": false, 00:06:13.406 "zone_append": false, 00:06:13.406 "compare": false, 00:06:13.406 "compare_and_write": false, 00:06:13.406 "abort": true, 00:06:13.406 "seek_hole": false, 00:06:13.406 "seek_data": false, 00:06:13.406 "copy": true, 00:06:13.406 "nvme_iov_md": false 00:06:13.406 }, 00:06:13.406 "memory_domains": [ 00:06:13.406 { 00:06:13.406 "dma_device_id": "system", 00:06:13.406 "dma_device_type": 1 00:06:13.406 }, 00:06:13.406 { 00:06:13.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.407 "dma_device_type": 2 00:06:13.407 } 00:06:13.407 ], 00:06:13.407 "driver_specific": {} 00:06:13.407 } 00:06:13.407 ]' 00:06:13.407 22:37:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.407 [2024-12-07 22:37:28.027097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:13.407 [2024-12-07 22:37:28.027141] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:13.407 [2024-12-07 22:37:28.027158] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1227d70 00:06:13.407 [2024-12-07 22:37:28.027167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:13.407 [2024-12-07 22:37:28.028377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:13.407 [2024-12-07 22:37:28.028411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:13.407 Passthru0 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:13.407 { 00:06:13.407 "name": "Malloc2", 00:06:13.407 "aliases": [ 00:06:13.407 "6ce254fb-fc57-4d29-983b-76eb891f4663" 00:06:13.407 ], 00:06:13.407 "product_name": "Malloc disk", 00:06:13.407 "block_size": 512, 00:06:13.407 "num_blocks": 16384, 00:06:13.407 "uuid": "6ce254fb-fc57-4d29-983b-76eb891f4663", 00:06:13.407 "assigned_rate_limits": { 00:06:13.407 "rw_ios_per_sec": 0, 00:06:13.407 "rw_mbytes_per_sec": 0, 00:06:13.407 "r_mbytes_per_sec": 0, 00:06:13.407 "w_mbytes_per_sec": 0 00:06:13.407 }, 00:06:13.407 "claimed": true, 00:06:13.407 "claim_type": "exclusive_write", 00:06:13.407 "zoned": false, 00:06:13.407 "supported_io_types": { 00:06:13.407 "read": true, 00:06:13.407 "write": true, 00:06:13.407 "unmap": true, 00:06:13.407 "flush": true, 00:06:13.407 "reset": true, 00:06:13.407 "nvme_admin": false, 00:06:13.407 "nvme_io": false, 00:06:13.407 "nvme_io_md": false, 00:06:13.407 "write_zeroes": true, 00:06:13.407 "zcopy": true, 00:06:13.407 "get_zone_info": false, 00:06:13.407 "zone_management": false, 00:06:13.407 "zone_append": false, 00:06:13.407 "compare": false, 00:06:13.407 "compare_and_write": false, 00:06:13.407 "abort": true, 00:06:13.407 "seek_hole": false, 00:06:13.407 "seek_data": false, 00:06:13.407 "copy": true, 00:06:13.407 "nvme_iov_md": false 00:06:13.407 }, 00:06:13.407 "memory_domains": [ 00:06:13.407 { 00:06:13.407 "dma_device_id": "system", 00:06:13.407 "dma_device_type": 1 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.407 "dma_device_type": 2 00:06:13.407 } 00:06:13.407 ], 00:06:13.407 "driver_specific": {} 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "name": "Passthru0", 00:06:13.407 "aliases": [ 00:06:13.407 "c5a4c8f6-eb73-57bf-988c-96e62cb934ba" 00:06:13.407 ], 00:06:13.407 "product_name": "passthru", 00:06:13.407 "block_size": 512, 00:06:13.407 "num_blocks": 16384, 00:06:13.407 "uuid": "c5a4c8f6-eb73-57bf-988c-96e62cb934ba", 00:06:13.407 "assigned_rate_limits": { 00:06:13.407 "rw_ios_per_sec": 0, 00:06:13.407 "rw_mbytes_per_sec": 0, 00:06:13.407 "r_mbytes_per_sec": 0, 00:06:13.407 "w_mbytes_per_sec": 0 00:06:13.407 }, 00:06:13.407 
"claimed": false, 00:06:13.407 "zoned": false, 00:06:13.407 "supported_io_types": { 00:06:13.407 "read": true, 00:06:13.407 "write": true, 00:06:13.407 "unmap": true, 00:06:13.407 "flush": true, 00:06:13.407 "reset": true, 00:06:13.407 "nvme_admin": false, 00:06:13.407 "nvme_io": false, 00:06:13.407 "nvme_io_md": false, 00:06:13.407 "write_zeroes": true, 00:06:13.407 "zcopy": true, 00:06:13.407 "get_zone_info": false, 00:06:13.407 "zone_management": false, 00:06:13.407 "zone_append": false, 00:06:13.407 "compare": false, 00:06:13.407 "compare_and_write": false, 00:06:13.407 "abort": true, 00:06:13.407 "seek_hole": false, 00:06:13.407 "seek_data": false, 00:06:13.407 "copy": true, 00:06:13.407 "nvme_iov_md": false 00:06:13.407 }, 00:06:13.407 "memory_domains": [ 00:06:13.407 { 00:06:13.407 "dma_device_id": "system", 00:06:13.407 "dma_device_type": 1 00:06:13.407 }, 00:06:13.407 { 00:06:13.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:13.407 "dma_device_type": 2 00:06:13.407 } 00:06:13.407 ], 00:06:13.407 "driver_specific": { 00:06:13.407 "passthru": { 00:06:13.407 "name": "Passthru0", 00:06:13.407 "base_bdev_name": "Malloc2" 00:06:13.407 } 00:06:13.407 } 00:06:13.407 } 00:06:13.407 ]' 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:13.407 22:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:13.666 22:37:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:13.666 00:06:13.666 real 0m0.347s 00:06:13.666 user 0m0.244s 00:06:13.666 sys 0m0.031s 00:06:13.666 22:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.666 22:37:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:13.666 ************************************ 00:06:13.666 END TEST rpc_daemon_integrity 00:06:13.666 ************************************ 00:06:13.666 22:37:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:13.666 22:37:28 rpc -- rpc/rpc.sh@84 -- # killprocess 68952 00:06:13.666 22:37:28 rpc -- common/autotest_common.sh@950 -- # '[' -z 68952 ']' 00:06:13.666 22:37:28 rpc -- common/autotest_common.sh@954 -- # kill -0 68952 00:06:13.666 22:37:28 rpc -- 
common/autotest_common.sh@955 -- # uname 00:06:13.666 22:37:28 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.666 22:37:28 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68952 00:06:13.666 22:37:28 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.666 killing process with pid 68952 00:06:13.666 22:37:28 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.666 22:37:28 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68952' 00:06:13.666 22:37:28 rpc -- common/autotest_common.sh@969 -- # kill 68952 00:06:13.666 22:37:28 rpc -- common/autotest_common.sh@974 -- # wait 68952 00:06:13.925 00:06:13.925 real 0m2.197s 00:06:13.925 user 0m3.007s 00:06:13.925 sys 0m0.560s 00:06:13.925 22:37:28 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.925 22:37:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.925 ************************************ 00:06:13.925 END TEST rpc 00:06:13.925 ************************************ 00:06:13.925 22:37:28 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:13.925 22:37:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.925 22:37:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.925 22:37:28 -- common/autotest_common.sh@10 -- # set +x 00:06:13.925 ************************************ 00:06:13.925 START TEST skip_rpc 00:06:13.925 ************************************ 00:06:13.925 22:37:28 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:13.925 * Looking for test storage... 00:06:13.925 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:13.925 22:37:28 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:13.925 22:37:28 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:13.925 22:37:28 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:14.185 22:37:28 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.185 22:37:28 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:14.185 22:37:28 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.185 22:37:28 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:14.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.185 --rc genhtml_branch_coverage=1 00:06:14.185 --rc genhtml_function_coverage=1 00:06:14.185 --rc genhtml_legend=1 00:06:14.185 --rc geninfo_all_blocks=1 00:06:14.185 --rc geninfo_unexecuted_blocks=1 00:06:14.185 00:06:14.185 ' 00:06:14.185 22:37:28 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:14.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.185 --rc genhtml_branch_coverage=1 00:06:14.185 --rc genhtml_function_coverage=1 00:06:14.185 --rc genhtml_legend=1 00:06:14.185 --rc geninfo_all_blocks=1 00:06:14.185 --rc geninfo_unexecuted_blocks=1 00:06:14.185 00:06:14.185 ' 00:06:14.185 22:37:28 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:14.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.185 --rc genhtml_branch_coverage=1 00:06:14.185 --rc genhtml_function_coverage=1 00:06:14.185 --rc genhtml_legend=1 00:06:14.185 --rc geninfo_all_blocks=1 00:06:14.185 --rc geninfo_unexecuted_blocks=1 00:06:14.185 00:06:14.185 ' 00:06:14.185 22:37:28 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:14.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.185 --rc genhtml_branch_coverage=1 00:06:14.185 --rc genhtml_function_coverage=1 00:06:14.185 --rc genhtml_legend=1 00:06:14.185 --rc geninfo_all_blocks=1 00:06:14.185 --rc geninfo_unexecuted_blocks=1 00:06:14.185 00:06:14.185 ' 00:06:14.185 22:37:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:14.185 22:37:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:14.185 22:37:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:14.185 22:37:28 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.185 22:37:28 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.185 22:37:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.185 ************************************ 00:06:14.185 START TEST skip_rpc 00:06:14.185 ************************************ 00:06:14.185 22:37:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:14.185 22:37:28 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=69145 00:06:14.185 22:37:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:14.185 22:37:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.185 22:37:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:14.185 [2024-12-07 22:37:28.850640] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:14.185 [2024-12-07 22:37:28.850924] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69145 ] 00:06:14.445 [2024-12-07 22:37:28.985664] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.445 [2024-12-07 22:37:29.020213] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.445 [2024-12-07 22:37:29.054800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.714 22:37:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:19.714 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:19.714 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:19.714 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:19.714 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.714 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:19.714 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.714 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:19.714 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69145 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69145 ']' 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69145 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69145 00:06:19.715 killing process with pid 69145 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 69145' 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69145 00:06:19.715 22:37:33 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69145 00:06:19.715 ************************************ 00:06:19.715 END TEST skip_rpc 00:06:19.715 ************************************ 00:06:19.715 00:06:19.715 real 0m5.283s 00:06:19.715 user 0m5.013s 00:06:19.715 sys 0m0.187s 00:06:19.715 22:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.715 22:37:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.715 22:37:34 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:19.715 22:37:34 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.715 22:37:34 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.715 22:37:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.715 ************************************ 00:06:19.715 START TEST skip_rpc_with_json 00:06:19.715 ************************************ 00:06:19.715 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:19.715 22:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:19.715 22:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69226 00:06:19.715 22:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.715 22:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69226 00:06:19.715 22:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.715 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69226 ']' 00:06:19.715 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.715 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.715 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.715 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.715 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:19.715 [2024-12-07 22:37:34.183334] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:19.715 [2024-12-07 22:37:34.183636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69226 ] 00:06:19.715 [2024-12-07 22:37:34.312932] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.715 [2024-12-07 22:37:34.348143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.715 [2024-12-07 22:37:34.388777] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.974 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.974 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:19.974 22:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:19.974 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.974 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:19.974 [2024-12-07 22:37:34.515936] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:19.974 request: 00:06:19.974 { 00:06:19.974 "trtype": "tcp", 00:06:19.974 "method": "nvmf_get_transports", 00:06:19.974 "req_id": 1 00:06:19.974 } 00:06:19.974 Got JSON-RPC error response 00:06:19.974 response: 00:06:19.974 { 00:06:19.974 "code": -19, 00:06:19.974 "message": "No such device" 00:06:19.974 } 00:06:19.974 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:19.974 22:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:19.974 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.974 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:19.974 [2024-12-07 22:37:34.528029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:19.974 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.974 22:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:19.974 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.974 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:19.974 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.974 22:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:19.974 { 00:06:19.974 "subsystems": [ 00:06:19.974 { 00:06:19.974 "subsystem": "fsdev", 00:06:19.974 "config": [ 00:06:19.974 { 00:06:19.975 "method": "fsdev_set_opts", 00:06:19.975 "params": { 00:06:19.975 "fsdev_io_pool_size": 65535, 00:06:19.975 "fsdev_io_cache_size": 256 00:06:19.975 } 00:06:19.975 } 00:06:19.975 ] 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "subsystem": "keyring", 00:06:19.975 "config": [] 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "subsystem": "iobuf", 00:06:19.975 "config": [ 00:06:19.975 { 00:06:19.975 "method": "iobuf_set_options", 00:06:19.975 "params": { 00:06:19.975 "small_pool_count": 8192, 00:06:19.975 "large_pool_count": 1024, 00:06:19.975 "small_bufsize": 8192, 00:06:19.975 "large_bufsize": 135168 00:06:19.975 } 00:06:19.975 } 00:06:19.975 ] 00:06:19.975 
}, 00:06:19.975 { 00:06:19.975 "subsystem": "sock", 00:06:19.975 "config": [ 00:06:19.975 { 00:06:19.975 "method": "sock_set_default_impl", 00:06:19.975 "params": { 00:06:19.975 "impl_name": "uring" 00:06:19.975 } 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "method": "sock_impl_set_options", 00:06:19.975 "params": { 00:06:19.975 "impl_name": "ssl", 00:06:19.975 "recv_buf_size": 4096, 00:06:19.975 "send_buf_size": 4096, 00:06:19.975 "enable_recv_pipe": true, 00:06:19.975 "enable_quickack": false, 00:06:19.975 "enable_placement_id": 0, 00:06:19.975 "enable_zerocopy_send_server": true, 00:06:19.975 "enable_zerocopy_send_client": false, 00:06:19.975 "zerocopy_threshold": 0, 00:06:19.975 "tls_version": 0, 00:06:19.975 "enable_ktls": false 00:06:19.975 } 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "method": "sock_impl_set_options", 00:06:19.975 "params": { 00:06:19.975 "impl_name": "posix", 00:06:19.975 "recv_buf_size": 2097152, 00:06:19.975 "send_buf_size": 2097152, 00:06:19.975 "enable_recv_pipe": true, 00:06:19.975 "enable_quickack": false, 00:06:19.975 "enable_placement_id": 0, 00:06:19.975 "enable_zerocopy_send_server": true, 00:06:19.975 "enable_zerocopy_send_client": false, 00:06:19.975 "zerocopy_threshold": 0, 00:06:19.975 "tls_version": 0, 00:06:19.975 "enable_ktls": false 00:06:19.975 } 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "method": "sock_impl_set_options", 00:06:19.975 "params": { 00:06:19.975 "impl_name": "uring", 00:06:19.975 "recv_buf_size": 2097152, 00:06:19.975 "send_buf_size": 2097152, 00:06:19.975 "enable_recv_pipe": true, 00:06:19.975 "enable_quickack": false, 00:06:19.975 "enable_placement_id": 0, 00:06:19.975 "enable_zerocopy_send_server": false, 00:06:19.975 "enable_zerocopy_send_client": false, 00:06:19.975 "zerocopy_threshold": 0, 00:06:19.975 "tls_version": 0, 00:06:19.975 "enable_ktls": false 00:06:19.975 } 00:06:19.975 } 00:06:19.975 ] 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "subsystem": "vmd", 00:06:19.975 "config": [] 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "subsystem": "accel", 00:06:19.975 "config": [ 00:06:19.975 { 00:06:19.975 "method": "accel_set_options", 00:06:19.975 "params": { 00:06:19.975 "small_cache_size": 128, 00:06:19.975 "large_cache_size": 16, 00:06:19.975 "task_count": 2048, 00:06:19.975 "sequence_count": 2048, 00:06:19.975 "buf_count": 2048 00:06:19.975 } 00:06:19.975 } 00:06:19.975 ] 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "subsystem": "bdev", 00:06:19.975 "config": [ 00:06:19.975 { 00:06:19.975 "method": "bdev_set_options", 00:06:19.975 "params": { 00:06:19.975 "bdev_io_pool_size": 65535, 00:06:19.975 "bdev_io_cache_size": 256, 00:06:19.975 "bdev_auto_examine": true, 00:06:19.975 "iobuf_small_cache_size": 128, 00:06:19.975 "iobuf_large_cache_size": 16 00:06:19.975 } 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "method": "bdev_raid_set_options", 00:06:19.975 "params": { 00:06:19.975 "process_window_size_kb": 1024, 00:06:19.975 "process_max_bandwidth_mb_sec": 0 00:06:19.975 } 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "method": "bdev_iscsi_set_options", 00:06:19.975 "params": { 00:06:19.975 "timeout_sec": 30 00:06:19.975 } 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "method": "bdev_nvme_set_options", 00:06:19.975 "params": { 00:06:19.975 "action_on_timeout": "none", 00:06:19.975 "timeout_us": 0, 00:06:19.975 "timeout_admin_us": 0, 00:06:19.975 "keep_alive_timeout_ms": 10000, 00:06:19.975 "arbitration_burst": 0, 00:06:19.975 "low_priority_weight": 0, 00:06:19.975 "medium_priority_weight": 0, 00:06:19.975 "high_priority_weight": 0, 
00:06:19.975 "nvme_adminq_poll_period_us": 10000, 00:06:19.975 "nvme_ioq_poll_period_us": 0, 00:06:19.975 "io_queue_requests": 0, 00:06:19.975 "delay_cmd_submit": true, 00:06:19.975 "transport_retry_count": 4, 00:06:19.975 "bdev_retry_count": 3, 00:06:19.975 "transport_ack_timeout": 0, 00:06:19.975 "ctrlr_loss_timeout_sec": 0, 00:06:19.975 "reconnect_delay_sec": 0, 00:06:19.975 "fast_io_fail_timeout_sec": 0, 00:06:19.975 "disable_auto_failback": false, 00:06:19.975 "generate_uuids": false, 00:06:19.975 "transport_tos": 0, 00:06:19.975 "nvme_error_stat": false, 00:06:19.975 "rdma_srq_size": 0, 00:06:19.975 "io_path_stat": false, 00:06:19.975 "allow_accel_sequence": false, 00:06:19.975 "rdma_max_cq_size": 0, 00:06:19.975 "rdma_cm_event_timeout_ms": 0, 00:06:19.975 "dhchap_digests": [ 00:06:19.975 "sha256", 00:06:19.975 "sha384", 00:06:19.975 "sha512" 00:06:19.975 ], 00:06:19.975 "dhchap_dhgroups": [ 00:06:19.975 "null", 00:06:19.975 "ffdhe2048", 00:06:19.975 "ffdhe3072", 00:06:19.975 "ffdhe4096", 00:06:19.975 "ffdhe6144", 00:06:19.975 "ffdhe8192" 00:06:19.975 ] 00:06:19.975 } 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "method": "bdev_nvme_set_hotplug", 00:06:19.975 "params": { 00:06:19.975 "period_us": 100000, 00:06:19.975 "enable": false 00:06:19.975 } 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "method": "bdev_wait_for_examine" 00:06:19.975 } 00:06:19.975 ] 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "subsystem": "scsi", 00:06:19.975 "config": null 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "subsystem": "scheduler", 00:06:19.975 "config": [ 00:06:19.975 { 00:06:19.975 "method": "framework_set_scheduler", 00:06:19.975 "params": { 00:06:19.975 "name": "static" 00:06:19.975 } 00:06:19.975 } 00:06:19.975 ] 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "subsystem": "vhost_scsi", 00:06:19.975 "config": [] 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "subsystem": "vhost_blk", 00:06:19.975 "config": [] 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "subsystem": "ublk", 00:06:19.975 "config": [] 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "subsystem": "nbd", 00:06:19.975 "config": [] 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "subsystem": "nvmf", 00:06:19.975 "config": [ 00:06:19.975 { 00:06:19.975 "method": "nvmf_set_config", 00:06:19.975 "params": { 00:06:19.975 "discovery_filter": "match_any", 00:06:19.975 "admin_cmd_passthru": { 00:06:19.975 "identify_ctrlr": false 00:06:19.975 }, 00:06:19.975 "dhchap_digests": [ 00:06:19.975 "sha256", 00:06:19.975 "sha384", 00:06:19.975 "sha512" 00:06:19.975 ], 00:06:19.975 "dhchap_dhgroups": [ 00:06:19.975 "null", 00:06:19.975 "ffdhe2048", 00:06:19.975 "ffdhe3072", 00:06:19.975 "ffdhe4096", 00:06:19.975 "ffdhe6144", 00:06:19.975 "ffdhe8192" 00:06:19.975 ] 00:06:19.975 } 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "method": "nvmf_set_max_subsystems", 00:06:19.975 "params": { 00:06:19.975 "max_subsystems": 1024 00:06:19.975 } 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "method": "nvmf_set_crdt", 00:06:19.975 "params": { 00:06:19.975 "crdt1": 0, 00:06:19.975 "crdt2": 0, 00:06:19.975 "crdt3": 0 00:06:19.975 } 00:06:19.975 }, 00:06:19.975 { 00:06:19.975 "method": "nvmf_create_transport", 00:06:19.975 "params": { 00:06:19.975 "trtype": "TCP", 00:06:19.975 "max_queue_depth": 128, 00:06:19.975 "max_io_qpairs_per_ctrlr": 127, 00:06:19.975 "in_capsule_data_size": 4096, 00:06:19.975 "max_io_size": 131072, 00:06:19.975 "io_unit_size": 131072, 00:06:19.975 "max_aq_depth": 128, 00:06:19.975 "num_shared_buffers": 511, 00:06:19.975 "buf_cache_size": 4294967295, 00:06:19.975 
"dif_insert_or_strip": false, 00:06:19.976 "zcopy": false, 00:06:19.976 "c2h_success": true, 00:06:19.976 "sock_priority": 0, 00:06:19.976 "abort_timeout_sec": 1, 00:06:19.976 "ack_timeout": 0, 00:06:19.976 "data_wr_pool_size": 0 00:06:19.976 } 00:06:19.976 } 00:06:19.976 ] 00:06:19.976 }, 00:06:19.976 { 00:06:19.976 "subsystem": "iscsi", 00:06:19.976 "config": [ 00:06:19.976 { 00:06:19.976 "method": "iscsi_set_options", 00:06:19.976 "params": { 00:06:19.976 "node_base": "iqn.2016-06.io.spdk", 00:06:19.976 "max_sessions": 128, 00:06:19.976 "max_connections_per_session": 2, 00:06:19.976 "max_queue_depth": 64, 00:06:19.976 "default_time2wait": 2, 00:06:19.976 "default_time2retain": 20, 00:06:19.976 "first_burst_length": 8192, 00:06:19.976 "immediate_data": true, 00:06:19.976 "allow_duplicated_isid": false, 00:06:19.976 "error_recovery_level": 0, 00:06:19.976 "nop_timeout": 60, 00:06:19.976 "nop_in_interval": 30, 00:06:19.976 "disable_chap": false, 00:06:19.976 "require_chap": false, 00:06:19.976 "mutual_chap": false, 00:06:19.976 "chap_group": 0, 00:06:19.976 "max_large_datain_per_connection": 64, 00:06:19.976 "max_r2t_per_connection": 4, 00:06:19.976 "pdu_pool_size": 36864, 00:06:19.976 "immediate_data_pool_size": 16384, 00:06:19.976 "data_out_pool_size": 2048 00:06:19.976 } 00:06:19.976 } 00:06:19.976 ] 00:06:19.976 } 00:06:19.976 ] 00:06:19.976 } 00:06:19.976 22:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:19.976 22:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69226 00:06:19.976 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69226 ']' 00:06:19.976 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69226 00:06:19.976 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:19.976 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:19.976 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69226 00:06:20.234 killing process with pid 69226 00:06:20.234 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.234 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.234 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69226' 00:06:20.235 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69226 00:06:20.235 22:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69226 00:06:20.235 22:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69246 00:06:20.235 22:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:20.235 22:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:25.499 22:37:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69246 00:06:25.499 22:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69246 ']' 00:06:25.499 22:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69246 00:06:25.499 22:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:25.499 22:37:39 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:25.499 22:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69246 00:06:25.499 killing process with pid 69246 00:06:25.499 22:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:25.499 22:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:25.499 22:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69246' 00:06:25.499 22:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69246 00:06:25.499 22:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69246 00:06:25.499 22:37:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:25.499 22:37:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:25.499 00:06:25.499 real 0m6.135s 00:06:25.499 user 0m5.834s 00:06:25.499 sys 0m0.446s 00:06:25.499 22:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.499 ************************************ 00:06:25.499 END TEST skip_rpc_with_json 00:06:25.499 ************************************ 00:06:25.499 22:37:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:25.758 22:37:40 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:25.758 22:37:40 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.758 22:37:40 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.758 22:37:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.758 ************************************ 00:06:25.758 START TEST skip_rpc_with_delay 00:06:25.758 ************************************ 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:25.758 [2024-12-07 22:37:40.373388] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:25.758 [2024-12-07 22:37:40.373687] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:25.758 00:06:25.758 real 0m0.088s 00:06:25.758 user 0m0.054s 00:06:25.758 sys 0m0.032s 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.758 ************************************ 00:06:25.758 END TEST skip_rpc_with_delay 00:06:25.758 ************************************ 00:06:25.758 22:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:25.758 22:37:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:25.758 22:37:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:25.758 22:37:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:25.758 22:37:40 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.758 22:37:40 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.758 22:37:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.758 ************************************ 00:06:25.758 START TEST exit_on_failed_rpc_init 00:06:25.758 ************************************ 00:06:25.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.758 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:25.758 22:37:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69356 00:06:25.758 22:37:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69356 00:06:25.758 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69356 ']' 00:06:25.758 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.758 22:37:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.758 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.758 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.758 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.758 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:25.758 [2024-12-07 22:37:40.513367] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
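The exit_on_failed_rpc_init test starting here runs two targets against the same default RPC socket: the first (pid 69356) owns /var/tmp/spdk.sock, so the second must fail RPC initialization and exit non-zero, which the NOT wrapper below asserts. The collision can be reproduced directly (a sketch built from the invocations shown in this log; the sleep is a crude stand-in for waitforlisten):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # first target claims /var/tmp/spdk.sock
    sleep 1
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2     # same socket: RPC init fails, app exits
    echo $?                                                    # expected: non-zero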
00:06:25.758 [2024-12-07 22:37:40.513455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69356 ] 00:06:26.017 [2024-12-07 22:37:40.653123] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.017 [2024-12-07 22:37:40.695316] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.017 [2024-12-07 22:37:40.737360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.277 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.277 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:26.277 22:37:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.277 22:37:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.277 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:26.277 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.277 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.277 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.277 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.277 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.277 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.277 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.277 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.277 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:26.277 22:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.277 [2024-12-07 22:37:40.940070] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:26.277 [2024-12-07 22:37:40.940191] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69365 ] 00:06:26.537 [2024-12-07 22:37:41.080435] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.537 [2024-12-07 22:37:41.121086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.537 [2024-12-07 22:37:41.121220] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:26.537 [2024-12-07 22:37:41.121238] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:26.537 [2024-12-07 22:37:41.121248] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69356 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69356 ']' 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69356 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69356 00:06:26.537 killing process with pid 69356 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69356' 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69356 00:06:26.537 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69356 00:06:26.797 ************************************ 00:06:26.797 END TEST exit_on_failed_rpc_init 00:06:26.797 ************************************ 00:06:26.797 00:06:26.797 real 0m1.019s 00:06:26.797 user 0m1.176s 00:06:26.797 sys 0m0.291s 00:06:26.797 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.797 22:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:26.797 22:37:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:26.797 00:06:26.797 real 0m12.935s 00:06:26.797 user 0m12.268s 00:06:26.797 sys 0m1.147s 00:06:26.797 22:37:41 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.797 22:37:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.797 ************************************ 00:06:26.797 END TEST skip_rpc 00:06:26.797 ************************************ 00:06:26.797 22:37:41 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:26.797 22:37:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.797 22:37:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.797 22:37:41 -- common/autotest_common.sh@10 -- # set +x 00:06:27.056 
************************************ 00:06:27.056 START TEST rpc_client 00:06:27.056 ************************************ 00:06:27.056 22:37:41 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:27.056 * Looking for test storage... 00:06:27.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:27.056 22:37:41 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:27.057 22:37:41 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:27.057 22:37:41 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:27.057 22:37:41 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.057 22:37:41 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:27.057 22:37:41 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.057 22:37:41 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:27.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.057 --rc genhtml_branch_coverage=1 00:06:27.057 --rc genhtml_function_coverage=1 00:06:27.057 --rc genhtml_legend=1 00:06:27.057 --rc geninfo_all_blocks=1 00:06:27.057 --rc geninfo_unexecuted_blocks=1 00:06:27.057 00:06:27.057 ' 00:06:27.057 22:37:41 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:27.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.057 --rc genhtml_branch_coverage=1 00:06:27.057 --rc genhtml_function_coverage=1 00:06:27.057 --rc genhtml_legend=1 00:06:27.057 --rc geninfo_all_blocks=1 00:06:27.057 --rc geninfo_unexecuted_blocks=1 00:06:27.057 00:06:27.057 ' 00:06:27.057 22:37:41 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:27.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.057 --rc genhtml_branch_coverage=1 00:06:27.057 --rc genhtml_function_coverage=1 00:06:27.057 --rc genhtml_legend=1 00:06:27.057 --rc geninfo_all_blocks=1 00:06:27.057 --rc geninfo_unexecuted_blocks=1 00:06:27.057 00:06:27.057 ' 00:06:27.057 22:37:41 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:27.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.057 --rc genhtml_branch_coverage=1 00:06:27.057 --rc genhtml_function_coverage=1 00:06:27.057 --rc genhtml_legend=1 00:06:27.057 --rc geninfo_all_blocks=1 00:06:27.057 --rc geninfo_unexecuted_blocks=1 00:06:27.057 00:06:27.057 ' 00:06:27.057 22:37:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:27.057 OK 00:06:27.057 22:37:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:27.057 00:06:27.057 real 0m0.209s 00:06:27.057 user 0m0.123s 00:06:27.057 sys 0m0.098s 00:06:27.057 22:37:41 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.057 22:37:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:27.057 ************************************ 00:06:27.057 END TEST rpc_client 00:06:27.057 ************************************ 00:06:27.057 22:37:41 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:27.057 22:37:41 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.057 22:37:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.057 22:37:41 -- common/autotest_common.sh@10 -- # set +x 00:06:27.318 ************************************ 00:06:27.318 START TEST json_config 00:06:27.318 ************************************ 00:06:27.318 22:37:41 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:27.318 22:37:41 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:27.318 22:37:41 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:27.318 22:37:41 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:27.318 22:37:41 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:27.318 22:37:41 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.318 22:37:41 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.318 22:37:41 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.318 22:37:41 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.318 22:37:41 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.318 22:37:41 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.318 22:37:41 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.318 22:37:41 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.318 22:37:41 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.318 22:37:41 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.318 22:37:41 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.318 22:37:41 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:27.318 22:37:41 json_config -- scripts/common.sh@345 -- # : 1 00:06:27.318 22:37:41 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.318 22:37:41 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.318 22:37:41 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:27.318 22:37:41 json_config -- scripts/common.sh@353 -- # local d=1 00:06:27.318 22:37:41 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.318 22:37:41 json_config -- scripts/common.sh@355 -- # echo 1 00:06:27.318 22:37:41 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.318 22:37:41 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:27.318 22:37:41 json_config -- scripts/common.sh@353 -- # local d=2 00:06:27.318 22:37:41 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.318 22:37:41 json_config -- scripts/common.sh@355 -- # echo 2 00:06:27.318 22:37:41 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.318 22:37:41 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.318 22:37:41 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.318 22:37:41 json_config -- scripts/common.sh@368 -- # return 0 00:06:27.318 22:37:41 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.318 22:37:41 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:27.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.318 --rc genhtml_branch_coverage=1 00:06:27.318 --rc genhtml_function_coverage=1 00:06:27.318 --rc genhtml_legend=1 00:06:27.318 --rc geninfo_all_blocks=1 00:06:27.318 --rc geninfo_unexecuted_blocks=1 00:06:27.318 00:06:27.318 ' 00:06:27.318 22:37:41 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:27.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.318 --rc genhtml_branch_coverage=1 00:06:27.318 --rc genhtml_function_coverage=1 00:06:27.318 --rc genhtml_legend=1 00:06:27.318 --rc geninfo_all_blocks=1 00:06:27.318 --rc geninfo_unexecuted_blocks=1 00:06:27.318 00:06:27.318 ' 00:06:27.318 22:37:41 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:27.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.318 --rc genhtml_branch_coverage=1 00:06:27.318 --rc genhtml_function_coverage=1 00:06:27.318 --rc genhtml_legend=1 00:06:27.318 --rc geninfo_all_blocks=1 00:06:27.318 --rc geninfo_unexecuted_blocks=1 00:06:27.318 00:06:27.318 ' 00:06:27.318 22:37:41 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:27.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.318 --rc genhtml_branch_coverage=1 00:06:27.318 --rc genhtml_function_coverage=1 00:06:27.318 --rc genhtml_legend=1 00:06:27.318 --rc geninfo_all_blocks=1 00:06:27.318 --rc geninfo_unexecuted_blocks=1 00:06:27.318 00:06:27.318 ' 00:06:27.318 22:37:41 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:27.318 22:37:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:27.318 22:37:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.318 22:37:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.318 22:37:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.318 22:37:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.318 22:37:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.318 22:37:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.318 22:37:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.318 22:37:41 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.318 22:37:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.318 22:37:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.318 22:37:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:06:27.318 22:37:41 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:06:27.318 22:37:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.318 22:37:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.318 22:37:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:27.318 22:37:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.318 22:37:41 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.318 22:37:41 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:27.318 22:37:41 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.318 22:37:41 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.318 22:37:42 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.318 22:37:42 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.318 22:37:42 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.318 22:37:42 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.318 22:37:42 json_config -- paths/export.sh@5 -- # export PATH 00:06:27.318 22:37:42 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.318 22:37:42 json_config -- nvmf/common.sh@51 -- # : 0 00:06:27.318 22:37:42 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:27.318 22:37:42 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:27.318 22:37:42 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.318 22:37:42 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.319 22:37:42 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.319 22:37:42 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:27.319 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:27.319 22:37:42 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:27.319 22:37:42 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:27.319 22:37:42 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:27.319 INFO: JSON configuration test init 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:27.319 22:37:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:27.319 22:37:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:27.319 22:37:42 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:27.319 22:37:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.319 22:37:42 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:27.319 22:37:42 json_config -- json_config/common.sh@9 -- # local app=target 00:06:27.319 22:37:42 json_config -- json_config/common.sh@10 -- # shift 
00:06:27.319 22:37:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:27.319 22:37:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:27.319 22:37:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:27.319 22:37:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.319 22:37:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.319 22:37:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=69500 00:06:27.319 Waiting for target to run... 00:06:27.319 22:37:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:27.319 22:37:42 json_config -- json_config/common.sh@25 -- # waitforlisten 69500 /var/tmp/spdk_tgt.sock 00:06:27.319 22:37:42 json_config -- common/autotest_common.sh@831 -- # '[' -z 69500 ']' 00:06:27.319 22:37:42 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:27.319 22:37:42 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:27.319 22:37:42 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.319 22:37:42 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:27.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:27.319 22:37:42 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.319 22:37:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.578 [2024-12-07 22:37:42.084546] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
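Here the target is launched with --wait-for-rpc, so it comes up with only the JSON-RPC server running and defers subsystem initialization, letting the test push a full configuration over the socket first. In outline (a sketch, not the harness's exact sequence; framework_start_init is the standard RPC that completes the deferred startup):

    sock=/var/tmp/spdk_tgt.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s "$sock" framework_start_init           # finish the deferred initialization
    "$rpc" -s "$sock" save_config > /tmp/live.json   # dump the live configuration as JSON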
00:06:27.578 [2024-12-07 22:37:42.084664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69500 ] 00:06:27.837 [2024-12-07 22:37:42.381925] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.837 [2024-12-07 22:37:42.409329] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.415 22:37:43 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.415 22:37:43 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:28.415 00:06:28.415 22:37:43 json_config -- json_config/common.sh@26 -- # echo '' 00:06:28.415 22:37:43 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:28.415 22:37:43 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:28.415 22:37:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:28.415 22:37:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.415 22:37:43 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:28.415 22:37:43 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:28.415 22:37:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:28.415 22:37:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.415 22:37:43 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:28.415 22:37:43 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:28.415 22:37:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:28.982 [2024-12-07 22:37:43.464378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:28.982 22:37:43 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:28.982 22:37:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:28.982 22:37:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:28.982 22:37:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:28.982 22:37:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:28.982 22:37:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:28.982 22:37:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:28.982 22:37:43 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:28.982 22:37:43 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:28.982 22:37:43 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:28.982 22:37:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:28.982 22:37:43 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@54 -- # sort 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:29.239 22:37:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:29.239 22:37:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:29.239 22:37:43 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:29.239 22:37:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:29.239 22:37:43 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:29.239 22:37:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:29.496 MallocForNvmf0 00:06:29.496 22:37:44 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:29.496 22:37:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:29.754 MallocForNvmf1 00:06:29.754 22:37:44 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:29.754 22:37:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:30.013 [2024-12-07 22:37:44.643504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.013 22:37:44 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:30.013 22:37:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:30.271 22:37:44 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:30.271 22:37:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:30.533 22:37:45 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:30.533 22:37:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:30.794 22:37:45 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:30.794 22:37:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:31.051 [2024-12-07 22:37:45.659942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:31.051 22:37:45 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:31.051 22:37:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:31.051 22:37:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.051 22:37:45 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:31.051 22:37:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:31.051 22:37:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.051 22:37:45 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:31.051 22:37:45 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:31.052 22:37:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:31.309 MallocBdevForConfigChangeCheck 00:06:31.309 22:37:46 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:31.309 22:37:46 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:31.309 22:37:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.566 22:37:46 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:31.566 22:37:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:31.823 INFO: shutting down applications... 00:06:31.823 22:37:46 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
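Shutdown here is a SIGINT plus a bounded liveness poll: as the next lines show, the harness sends kill -SIGINT to pid 69500 and then re-checks kill -0 every half second for up to 30 iterations before treating the target as stopped. The same loop in isolation (values as seen below):

    kill -SIGINT "$spdk_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$spdk_pid" 2>/dev/null || break   # process gone: stop waiting
        sleep 0.5
    done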
00:06:31.823 22:37:46 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:31.823 22:37:46 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:31.823 22:37:46 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:31.823 22:37:46 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:32.390 Calling clear_iscsi_subsystem 00:06:32.390 Calling clear_nvmf_subsystem 00:06:32.390 Calling clear_nbd_subsystem 00:06:32.390 Calling clear_ublk_subsystem 00:06:32.390 Calling clear_vhost_blk_subsystem 00:06:32.390 Calling clear_vhost_scsi_subsystem 00:06:32.390 Calling clear_bdev_subsystem 00:06:32.390 22:37:46 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:32.390 22:37:46 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:32.390 22:37:46 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:32.390 22:37:46 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:32.390 22:37:46 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:32.390 22:37:46 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:32.649 22:37:47 json_config -- json_config/json_config.sh@352 -- # break 00:06:32.649 22:37:47 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:32.649 22:37:47 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:32.649 22:37:47 json_config -- json_config/common.sh@31 -- # local app=target 00:06:32.649 22:37:47 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:32.649 22:37:47 json_config -- json_config/common.sh@35 -- # [[ -n 69500 ]] 00:06:32.649 22:37:47 json_config -- json_config/common.sh@38 -- # kill -SIGINT 69500 00:06:32.649 22:37:47 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:32.649 22:37:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:32.649 22:37:47 json_config -- json_config/common.sh@41 -- # kill -0 69500 00:06:32.649 22:37:47 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:33.216 22:37:47 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:33.217 22:37:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.217 22:37:47 json_config -- json_config/common.sh@41 -- # kill -0 69500 00:06:33.217 22:37:47 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:33.217 SPDK target shutdown done 00:06:33.217 INFO: relaunching applications... 00:06:33.217 22:37:47 json_config -- json_config/common.sh@43 -- # break 00:06:33.217 22:37:47 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:33.217 22:37:47 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:33.217 22:37:47 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
00:06:33.217 22:37:47 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:33.217 22:37:47 json_config -- json_config/common.sh@9 -- # local app=target 00:06:33.217 22:37:47 json_config -- json_config/common.sh@10 -- # shift 00:06:33.217 22:37:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:33.217 22:37:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:33.217 22:37:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:33.217 22:37:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.217 22:37:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:33.217 22:37:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=69696 00:06:33.217 22:37:47 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:33.217 22:37:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:33.217 Waiting for target to run... 00:06:33.217 22:37:47 json_config -- json_config/common.sh@25 -- # waitforlisten 69696 /var/tmp/spdk_tgt.sock 00:06:33.217 22:37:47 json_config -- common/autotest_common.sh@831 -- # '[' -z 69696 ']' 00:06:33.217 22:37:47 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:33.217 22:37:47 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.217 22:37:47 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:33.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:33.217 22:37:47 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.217 22:37:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:33.217 [2024-12-07 22:37:47.848684] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:33.217 [2024-12-07 22:37:47.849012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69696 ] 00:06:33.476 [2024-12-07 22:37:48.119523] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.476 [2024-12-07 22:37:48.140044] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.735 [2024-12-07 22:37:48.267347] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.735 [2024-12-07 22:37:48.455950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.735 [2024-12-07 22:37:48.488045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:34.301 00:06:34.301 INFO: Checking if target configuration is the same... 
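[Editor's note: waitforlisten above blocks until the relaunched target both stays alive and answers RPCs on the UNIX socket. A rough approximation of what the helper in autotest_common.sh does (the retry count mirrors the max_retries=100 seen in the trace; the exact probe the real helper uses may differ):

  pid=69696; sock=/var/tmp/spdk_tgt.sock
  for (( i = 0; i < 100; i++ )); do
      kill -0 "$pid" 2>/dev/null || { echo 'target process died' >&2; exit 1; }
      # a successful rpc_get_methods means the RPC server is accepting connections
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
          break
      fi
      sleep 0.1
  done
]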
00:06:34.301 22:37:48 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.301 22:37:48 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:34.301 22:37:48 json_config -- json_config/common.sh@26 -- # echo '' 00:06:34.301 22:37:48 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:34.301 22:37:48 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:34.301 22:37:48 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:34.301 22:37:48 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:34.301 22:37:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.301 + '[' 2 -ne 2 ']' 00:06:34.301 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:34.301 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:34.301 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:34.301 +++ basename /dev/fd/62 00:06:34.301 ++ mktemp /tmp/62.XXX 00:06:34.301 + tmp_file_1=/tmp/62.kAi 00:06:34.301 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:34.301 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:34.301 + tmp_file_2=/tmp/spdk_tgt_config.json.ZeY 00:06:34.301 + ret=0 00:06:34.302 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:34.560 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:34.560 + diff -u /tmp/62.kAi /tmp/spdk_tgt_config.json.ZeY 00:06:34.560 INFO: JSON config files are the same 00:06:34.560 + echo 'INFO: JSON config files are the same' 00:06:34.560 + rm /tmp/62.kAi /tmp/spdk_tgt_config.json.ZeY 00:06:34.560 + exit 0 00:06:34.560 INFO: changing configuration and checking if this can be detected... 00:06:34.560 22:37:49 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:34.560 22:37:49 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:34.560 22:37:49 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:34.560 22:37:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:34.819 22:37:49 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:34.819 22:37:49 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:34.819 22:37:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:34.819 + '[' 2 -ne 2 ']' 00:06:35.078 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:35.078 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
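[Editor's note: json_diff.sh's trick, visible in the '+' trace above, is to normalize both configurations with config_filter.py -method sort before diffing, so key ordering cannot produce false mismatches. The same idea in a few lines, with paths taken from the log:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
  live=$(mktemp /tmp/62.XXX); disk=$(mktemp /tmp/spdk_tgt_config.json.XXX)
  $RPC save_config | $FILTER -method sort > "$live"                              # live target config
  $FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > "$disk"  # on-disk config
  diff -u "$live" "$disk" && echo 'INFO: JSON config files are the same'
  rm "$live" "$disk"
]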
00:06:35.078 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:35.078 +++ basename /dev/fd/62 00:06:35.078 ++ mktemp /tmp/62.XXX 00:06:35.078 + tmp_file_1=/tmp/62.k3Z 00:06:35.078 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:35.078 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:35.078 + tmp_file_2=/tmp/spdk_tgt_config.json.MRD 00:06:35.078 + ret=0 00:06:35.078 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:35.338 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:35.338 + diff -u /tmp/62.k3Z /tmp/spdk_tgt_config.json.MRD 00:06:35.338 + ret=1 00:06:35.338 + echo '=== Start of file: /tmp/62.k3Z ===' 00:06:35.338 + cat /tmp/62.k3Z 00:06:35.338 + echo '=== End of file: /tmp/62.k3Z ===' 00:06:35.338 + echo '' 00:06:35.338 + echo '=== Start of file: /tmp/spdk_tgt_config.json.MRD ===' 00:06:35.338 + cat /tmp/spdk_tgt_config.json.MRD 00:06:35.338 + echo '=== End of file: /tmp/spdk_tgt_config.json.MRD ===' 00:06:35.338 + echo '' 00:06:35.338 + rm /tmp/62.k3Z /tmp/spdk_tgt_config.json.MRD 00:06:35.338 + exit 1 00:06:35.338 INFO: configuration change detected. 00:06:35.338 22:37:50 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:35.338 22:37:50 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:35.338 22:37:50 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:35.338 22:37:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:35.338 22:37:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.338 22:37:50 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:35.338 22:37:50 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:35.338 22:37:50 json_config -- json_config/json_config.sh@324 -- # [[ -n 69696 ]] 00:06:35.338 22:37:50 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:35.338 22:37:50 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:35.338 22:37:50 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:35.338 22:37:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.338 22:37:50 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:35.338 22:37:50 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:35.338 22:37:50 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:35.338 22:37:50 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:35.338 22:37:50 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:35.338 22:37:50 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:35.338 22:37:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:35.338 22:37:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.598 22:37:50 json_config -- json_config/json_config.sh@330 -- # killprocess 69696 00:06:35.598 22:37:50 json_config -- common/autotest_common.sh@950 -- # '[' -z 69696 ']' 00:06:35.598 22:37:50 json_config -- common/autotest_common.sh@954 -- # kill -0 69696 00:06:35.598 22:37:50 json_config -- common/autotest_common.sh@955 -- # uname 00:06:35.598 22:37:50 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.598 22:37:50 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69696 00:06:35.598 
22:37:50 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.598 22:37:50 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.598 22:37:50 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69696' 00:06:35.598 killing process with pid 69696 00:06:35.598 22:37:50 json_config -- common/autotest_common.sh@969 -- # kill 69696 00:06:35.598 22:37:50 json_config -- common/autotest_common.sh@974 -- # wait 69696 00:06:35.598 22:37:50 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:35.598 22:37:50 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:35.598 22:37:50 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:35.598 22:37:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.858 INFO: Success 00:06:35.858 22:37:50 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:35.858 22:37:50 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:35.858 ************************************ 00:06:35.858 END TEST json_config 00:06:35.858 ************************************ 00:06:35.858 00:06:35.858 real 0m8.541s 00:06:35.858 user 0m12.495s 00:06:35.858 sys 0m1.449s 00:06:35.858 22:37:50 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.858 22:37:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.858 22:37:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:35.858 22:37:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.858 22:37:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.858 22:37:50 -- common/autotest_common.sh@10 -- # set +x 00:06:35.858 ************************************ 00:06:35.858 START TEST json_config_extra_key 00:06:35.858 ************************************ 00:06:35.858 22:37:50 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:35.858 22:37:50 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:35.858 22:37:50 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:35.858 22:37:50 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:35.858 22:37:50 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.858 22:37:50 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:35.858 22:37:50 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.858 22:37:50 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:35.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.858 --rc genhtml_branch_coverage=1 00:06:35.858 --rc genhtml_function_coverage=1 00:06:35.858 --rc genhtml_legend=1 00:06:35.858 --rc geninfo_all_blocks=1 00:06:35.858 --rc geninfo_unexecuted_blocks=1 00:06:35.858 00:06:35.858 ' 00:06:35.858 22:37:50 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:35.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.858 --rc genhtml_branch_coverage=1 00:06:35.858 --rc genhtml_function_coverage=1 00:06:35.858 --rc genhtml_legend=1 00:06:35.858 --rc geninfo_all_blocks=1 00:06:35.858 --rc geninfo_unexecuted_blocks=1 00:06:35.858 00:06:35.858 ' 00:06:35.858 22:37:50 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:35.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.858 --rc genhtml_branch_coverage=1 00:06:35.858 --rc genhtml_function_coverage=1 00:06:35.858 --rc genhtml_legend=1 00:06:35.858 --rc geninfo_all_blocks=1 00:06:35.858 --rc geninfo_unexecuted_blocks=1 00:06:35.858 00:06:35.858 ' 00:06:35.858 22:37:50 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:35.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.858 --rc genhtml_branch_coverage=1 00:06:35.858 --rc genhtml_function_coverage=1 00:06:35.858 --rc genhtml_legend=1 00:06:35.858 --rc geninfo_all_blocks=1 00:06:35.858 --rc geninfo_unexecuted_blocks=1 00:06:35.858 00:06:35.858 ' 00:06:35.858 22:37:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.858 22:37:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.858 22:37:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.859 22:37:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.859 22:37:50 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.859 22:37:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.859 22:37:50 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.859 22:37:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:35.859 22:37:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.859 22:37:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:35.859 22:37:50 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:35.859 22:37:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:35.859 22:37:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.859 22:37:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.859 22:37:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.859 22:37:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:35.859 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:35.859 22:37:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:35.859 22:37:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:35.859 22:37:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:35.859 22:37:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:35.859 22:37:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:35.859 22:37:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:35.859 22:37:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:35.859 22:37:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:35.859 22:37:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:35.859 22:37:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:35.859 22:37:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:35.859 22:37:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:35.859 22:37:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:35.859 22:37:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:35.859 INFO: launching applications... 
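[Editor's note: the "[: : integer expression expected" message above is a real, if harmless, shell bug captured by the log: nvmf/common.sh line 33 runs [ '' -eq 1 ] because the variable it tests is unset, and test's -eq requires integers on both sides. A guarded form that avoids the message by defaulting empty/unset to 0 (variable name hypothetical):

  VAR=""
  # [ "$VAR" -eq 1 ]                        # would print "[: : integer expression expected"
  [ "${VAR:-0}" -eq 1 ] && echo enabled     # guarded: empty or unset counts as 0
]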
00:06:35.859 22:37:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:35.859 22:37:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:35.859 22:37:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:35.859 22:37:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:35.859 22:37:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:35.859 22:37:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:35.859 22:37:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.859 22:37:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.859 22:37:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69849 00:06:35.859 Waiting for target to run... 00:06:35.859 22:37:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:35.859 22:37:50 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:35.859 22:37:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69849 /var/tmp/spdk_tgt.sock 00:06:35.859 22:37:50 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69849 ']' 00:06:35.859 22:37:50 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:35.859 22:37:50 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.859 22:37:50 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:35.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:35.859 22:37:50 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.859 22:37:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:36.118 [2024-12-07 22:37:50.663689] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:36.118 [2024-12-07 22:37:50.663943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69849 ] 00:06:36.378 [2024-12-07 22:37:50.945729] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.378 [2024-12-07 22:37:50.965558] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.378 [2024-12-07 22:37:50.988888] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.947 00:06:36.947 INFO: shutting down applications... 00:06:36.947 22:37:51 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.947 22:37:51 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:36.947 22:37:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:36.947 22:37:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
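[Editor's note: here the target is launched with --json .../extra_key.json instead of being configured over RPC after boot. The file's contents are not shown in the log; for reference, SPDK --json configs are a "subsystems" array of saved RPC calls. A hypothetical minimal example (not the actual extra_key.json), written out and launched the same way the test does:

  cat > /tmp/extra_key_example.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [
      { "method": "bdev_malloc_create",
        "params": { "name": "MallocForTest", "num_blocks": 16384, "block_size": 512 } }
  ] } ] }
  EOF
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock --json /tmp/extra_key_example.json
]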
00:06:36.947 22:37:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:36.947 22:37:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:36.947 22:37:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:36.947 22:37:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69849 ]] 00:06:36.947 22:37:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69849 00:06:36.947 22:37:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:36.947 22:37:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:36.947 22:37:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69849 00:06:36.948 22:37:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:37.516 22:37:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:37.516 22:37:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:37.516 22:37:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69849 00:06:37.516 22:37:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:37.516 22:37:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:37.516 22:37:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:37.516 22:37:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:37.516 SPDK target shutdown done 00:06:37.516 22:37:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:37.516 Success 00:06:37.516 00:06:37.516 real 0m1.791s 00:06:37.516 user 0m1.682s 00:06:37.516 sys 0m0.301s 00:06:37.516 22:37:52 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.516 22:37:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:37.516 ************************************ 00:06:37.516 END TEST json_config_extra_key 00:06:37.516 ************************************ 00:06:37.516 22:37:52 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:37.516 22:37:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.516 22:37:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.516 22:37:52 -- common/autotest_common.sh@10 -- # set +x 00:06:37.516 ************************************ 00:06:37.516 START TEST alias_rpc 00:06:37.516 ************************************ 00:06:37.516 22:37:52 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:37.775 * Looking for test storage... 
00:06:37.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:37.775 22:37:52 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:37.775 22:37:52 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:37.775 22:37:52 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:37.775 22:37:52 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.775 22:37:52 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:37.775 22:37:52 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.775 22:37:52 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:37.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.775 --rc genhtml_branch_coverage=1 00:06:37.775 --rc genhtml_function_coverage=1 00:06:37.775 --rc genhtml_legend=1 00:06:37.775 --rc geninfo_all_blocks=1 00:06:37.775 --rc geninfo_unexecuted_blocks=1 00:06:37.775 00:06:37.775 ' 00:06:37.775 22:37:52 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:37.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.775 --rc genhtml_branch_coverage=1 00:06:37.775 --rc genhtml_function_coverage=1 00:06:37.775 --rc genhtml_legend=1 00:06:37.775 --rc geninfo_all_blocks=1 00:06:37.775 --rc geninfo_unexecuted_blocks=1 00:06:37.775 00:06:37.775 ' 00:06:37.775 22:37:52 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:37.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.775 --rc genhtml_branch_coverage=1 00:06:37.775 --rc genhtml_function_coverage=1 00:06:37.775 --rc genhtml_legend=1 00:06:37.775 --rc geninfo_all_blocks=1 00:06:37.775 --rc geninfo_unexecuted_blocks=1 00:06:37.775 00:06:37.775 ' 00:06:37.775 22:37:52 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:37.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.775 --rc genhtml_branch_coverage=1 00:06:37.775 --rc genhtml_function_coverage=1 00:06:37.775 --rc genhtml_legend=1 00:06:37.775 --rc geninfo_all_blocks=1 00:06:37.775 --rc geninfo_unexecuted_blocks=1 00:06:37.775 00:06:37.775 ' 00:06:37.775 22:37:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:37.775 22:37:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69922 00:06:37.775 22:37:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:37.775 22:37:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69922 00:06:37.775 22:37:52 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69922 ']' 00:06:37.775 22:37:52 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.775 22:37:52 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.775 22:37:52 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.775 22:37:52 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.775 22:37:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.775 [2024-12-07 22:37:52.501140] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:37.775 [2024-12-07 22:37:52.501454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69922 ] 00:06:38.034 [2024-12-07 22:37:52.636372] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.034 [2024-12-07 22:37:52.672437] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.034 [2024-12-07 22:37:52.708252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:38.293 22:37:52 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.293 22:37:52 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:38.293 22:37:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:38.552 22:37:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69922 00:06:38.552 22:37:53 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69922 ']' 00:06:38.552 22:37:53 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69922 00:06:38.552 22:37:53 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:38.552 22:37:53 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.552 22:37:53 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69922 00:06:38.552 killing process with pid 69922 00:06:38.552 22:37:53 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:38.552 22:37:53 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:38.552 22:37:53 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69922' 00:06:38.552 22:37:53 alias_rpc -- common/autotest_common.sh@969 -- # kill 69922 00:06:38.552 22:37:53 alias_rpc -- common/autotest_common.sh@974 -- # wait 69922 00:06:38.812 ************************************ 00:06:38.812 END TEST alias_rpc 00:06:38.812 ************************************ 00:06:38.812 00:06:38.812 real 0m1.141s 00:06:38.812 user 0m1.349s 00:06:38.812 sys 0m0.298s 00:06:38.812 22:37:53 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.812 22:37:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.812 22:37:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:38.812 22:37:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:38.812 22:37:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:38.812 22:37:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.812 22:37:53 -- common/autotest_common.sh@10 -- # set +x 00:06:38.812 ************************************ 00:06:38.812 START TEST spdkcli_tcp 00:06:38.812 ************************************ 00:06:38.812 22:37:53 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:38.812 * Looking for test storage... 
00:06:38.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:38.812 22:37:53 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:38.812 22:37:53 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:38.812 22:37:53 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:39.135 22:37:53 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.135 22:37:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:39.135 22:37:53 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.135 22:37:53 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:39.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.135 --rc genhtml_branch_coverage=1 00:06:39.135 --rc genhtml_function_coverage=1 00:06:39.135 --rc genhtml_legend=1 00:06:39.135 --rc geninfo_all_blocks=1 00:06:39.135 --rc geninfo_unexecuted_blocks=1 00:06:39.135 00:06:39.135 ' 00:06:39.135 22:37:53 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:39.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.135 --rc genhtml_branch_coverage=1 00:06:39.135 --rc genhtml_function_coverage=1 00:06:39.135 --rc genhtml_legend=1 00:06:39.135 --rc geninfo_all_blocks=1 00:06:39.135 --rc geninfo_unexecuted_blocks=1 00:06:39.135 
00:06:39.135 ' 00:06:39.135 22:37:53 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:39.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.135 --rc genhtml_branch_coverage=1 00:06:39.135 --rc genhtml_function_coverage=1 00:06:39.135 --rc genhtml_legend=1 00:06:39.135 --rc geninfo_all_blocks=1 00:06:39.135 --rc geninfo_unexecuted_blocks=1 00:06:39.135 00:06:39.135 ' 00:06:39.135 22:37:53 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:39.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.135 --rc genhtml_branch_coverage=1 00:06:39.135 --rc genhtml_function_coverage=1 00:06:39.135 --rc genhtml_legend=1 00:06:39.135 --rc geninfo_all_blocks=1 00:06:39.135 --rc geninfo_unexecuted_blocks=1 00:06:39.135 00:06:39.135 ' 00:06:39.135 22:37:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:39.135 22:37:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:39.135 22:37:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:39.135 22:37:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:39.135 22:37:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:39.135 22:37:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:39.135 22:37:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:39.135 22:37:53 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:39.135 22:37:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.135 22:37:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69993 00:06:39.135 22:37:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:39.135 22:37:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69993 00:06:39.135 22:37:53 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 69993 ']' 00:06:39.135 22:37:53 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.135 22:37:53 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.135 22:37:53 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.135 22:37:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.135 22:37:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:39.135 [2024-12-07 22:37:53.712234] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:06:39.135 [2024-12-07 22:37:53.712521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69993 ] 00:06:39.135 [2024-12-07 22:37:53.853202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.409 [2024-12-07 22:37:53.896037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.409 [2024-12-07 22:37:53.896046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.409 [2024-12-07 22:37:53.934091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.973 22:37:54 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:39.973 22:37:54 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:39.973 22:37:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70010 00:06:39.973 22:37:54 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:39.973 22:37:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:40.231 [ 00:06:40.231 "bdev_malloc_delete", 00:06:40.231 "bdev_malloc_create", 00:06:40.231 "bdev_null_resize", 00:06:40.231 "bdev_null_delete", 00:06:40.231 "bdev_null_create", 00:06:40.231 "bdev_nvme_cuse_unregister", 00:06:40.231 "bdev_nvme_cuse_register", 00:06:40.231 "bdev_opal_new_user", 00:06:40.231 "bdev_opal_set_lock_state", 00:06:40.231 "bdev_opal_delete", 00:06:40.231 "bdev_opal_get_info", 00:06:40.231 "bdev_opal_create", 00:06:40.231 "bdev_nvme_opal_revert", 00:06:40.231 "bdev_nvme_opal_init", 00:06:40.231 "bdev_nvme_send_cmd", 00:06:40.231 "bdev_nvme_set_keys", 00:06:40.231 "bdev_nvme_get_path_iostat", 00:06:40.231 "bdev_nvme_get_mdns_discovery_info", 00:06:40.231 "bdev_nvme_stop_mdns_discovery", 00:06:40.231 "bdev_nvme_start_mdns_discovery", 00:06:40.231 "bdev_nvme_set_multipath_policy", 00:06:40.231 "bdev_nvme_set_preferred_path", 00:06:40.231 "bdev_nvme_get_io_paths", 00:06:40.231 "bdev_nvme_remove_error_injection", 00:06:40.231 "bdev_nvme_add_error_injection", 00:06:40.231 "bdev_nvme_get_discovery_info", 00:06:40.231 "bdev_nvme_stop_discovery", 00:06:40.231 "bdev_nvme_start_discovery", 00:06:40.231 "bdev_nvme_get_controller_health_info", 00:06:40.231 "bdev_nvme_disable_controller", 00:06:40.231 "bdev_nvme_enable_controller", 00:06:40.231 "bdev_nvme_reset_controller", 00:06:40.231 "bdev_nvme_get_transport_statistics", 00:06:40.231 "bdev_nvme_apply_firmware", 00:06:40.231 "bdev_nvme_detach_controller", 00:06:40.231 "bdev_nvme_get_controllers", 00:06:40.231 "bdev_nvme_attach_controller", 00:06:40.231 "bdev_nvme_set_hotplug", 00:06:40.231 "bdev_nvme_set_options", 00:06:40.231 "bdev_passthru_delete", 00:06:40.231 "bdev_passthru_create", 00:06:40.231 "bdev_lvol_set_parent_bdev", 00:06:40.231 "bdev_lvol_set_parent", 00:06:40.231 "bdev_lvol_check_shallow_copy", 00:06:40.231 "bdev_lvol_start_shallow_copy", 00:06:40.231 "bdev_lvol_grow_lvstore", 00:06:40.231 "bdev_lvol_get_lvols", 00:06:40.231 "bdev_lvol_get_lvstores", 00:06:40.231 "bdev_lvol_delete", 00:06:40.231 "bdev_lvol_set_read_only", 00:06:40.231 "bdev_lvol_resize", 00:06:40.231 "bdev_lvol_decouple_parent", 00:06:40.231 "bdev_lvol_inflate", 00:06:40.231 "bdev_lvol_rename", 00:06:40.231 "bdev_lvol_clone_bdev", 00:06:40.231 "bdev_lvol_clone", 00:06:40.231 "bdev_lvol_snapshot", 
00:06:40.231 "bdev_lvol_create", 00:06:40.231 "bdev_lvol_delete_lvstore", 00:06:40.231 "bdev_lvol_rename_lvstore", 00:06:40.231 "bdev_lvol_create_lvstore", 00:06:40.231 "bdev_raid_set_options", 00:06:40.231 "bdev_raid_remove_base_bdev", 00:06:40.231 "bdev_raid_add_base_bdev", 00:06:40.231 "bdev_raid_delete", 00:06:40.231 "bdev_raid_create", 00:06:40.231 "bdev_raid_get_bdevs", 00:06:40.231 "bdev_error_inject_error", 00:06:40.231 "bdev_error_delete", 00:06:40.231 "bdev_error_create", 00:06:40.231 "bdev_split_delete", 00:06:40.231 "bdev_split_create", 00:06:40.231 "bdev_delay_delete", 00:06:40.231 "bdev_delay_create", 00:06:40.231 "bdev_delay_update_latency", 00:06:40.231 "bdev_zone_block_delete", 00:06:40.231 "bdev_zone_block_create", 00:06:40.231 "blobfs_create", 00:06:40.231 "blobfs_detect", 00:06:40.231 "blobfs_set_cache_size", 00:06:40.231 "bdev_aio_delete", 00:06:40.231 "bdev_aio_rescan", 00:06:40.231 "bdev_aio_create", 00:06:40.231 "bdev_ftl_set_property", 00:06:40.231 "bdev_ftl_get_properties", 00:06:40.231 "bdev_ftl_get_stats", 00:06:40.231 "bdev_ftl_unmap", 00:06:40.231 "bdev_ftl_unload", 00:06:40.231 "bdev_ftl_delete", 00:06:40.231 "bdev_ftl_load", 00:06:40.231 "bdev_ftl_create", 00:06:40.231 "bdev_virtio_attach_controller", 00:06:40.231 "bdev_virtio_scsi_get_devices", 00:06:40.231 "bdev_virtio_detach_controller", 00:06:40.231 "bdev_virtio_blk_set_hotplug", 00:06:40.231 "bdev_iscsi_delete", 00:06:40.231 "bdev_iscsi_create", 00:06:40.231 "bdev_iscsi_set_options", 00:06:40.231 "bdev_uring_delete", 00:06:40.231 "bdev_uring_rescan", 00:06:40.231 "bdev_uring_create", 00:06:40.231 "accel_error_inject_error", 00:06:40.231 "ioat_scan_accel_module", 00:06:40.231 "dsa_scan_accel_module", 00:06:40.231 "iaa_scan_accel_module", 00:06:40.231 "keyring_file_remove_key", 00:06:40.231 "keyring_file_add_key", 00:06:40.231 "keyring_linux_set_options", 00:06:40.231 "fsdev_aio_delete", 00:06:40.231 "fsdev_aio_create", 00:06:40.231 "iscsi_get_histogram", 00:06:40.231 "iscsi_enable_histogram", 00:06:40.231 "iscsi_set_options", 00:06:40.231 "iscsi_get_auth_groups", 00:06:40.231 "iscsi_auth_group_remove_secret", 00:06:40.231 "iscsi_auth_group_add_secret", 00:06:40.231 "iscsi_delete_auth_group", 00:06:40.231 "iscsi_create_auth_group", 00:06:40.231 "iscsi_set_discovery_auth", 00:06:40.231 "iscsi_get_options", 00:06:40.231 "iscsi_target_node_request_logout", 00:06:40.231 "iscsi_target_node_set_redirect", 00:06:40.231 "iscsi_target_node_set_auth", 00:06:40.231 "iscsi_target_node_add_lun", 00:06:40.231 "iscsi_get_stats", 00:06:40.231 "iscsi_get_connections", 00:06:40.231 "iscsi_portal_group_set_auth", 00:06:40.231 "iscsi_start_portal_group", 00:06:40.231 "iscsi_delete_portal_group", 00:06:40.231 "iscsi_create_portal_group", 00:06:40.231 "iscsi_get_portal_groups", 00:06:40.231 "iscsi_delete_target_node", 00:06:40.231 "iscsi_target_node_remove_pg_ig_maps", 00:06:40.231 "iscsi_target_node_add_pg_ig_maps", 00:06:40.231 "iscsi_create_target_node", 00:06:40.231 "iscsi_get_target_nodes", 00:06:40.231 "iscsi_delete_initiator_group", 00:06:40.231 "iscsi_initiator_group_remove_initiators", 00:06:40.231 "iscsi_initiator_group_add_initiators", 00:06:40.231 "iscsi_create_initiator_group", 00:06:40.231 "iscsi_get_initiator_groups", 00:06:40.231 "nvmf_set_crdt", 00:06:40.231 "nvmf_set_config", 00:06:40.231 "nvmf_set_max_subsystems", 00:06:40.231 "nvmf_stop_mdns_prr", 00:06:40.231 "nvmf_publish_mdns_prr", 00:06:40.231 "nvmf_subsystem_get_listeners", 00:06:40.231 "nvmf_subsystem_get_qpairs", 00:06:40.231 
"nvmf_subsystem_get_controllers", 00:06:40.231 "nvmf_get_stats", 00:06:40.231 "nvmf_get_transports", 00:06:40.231 "nvmf_create_transport", 00:06:40.231 "nvmf_get_targets", 00:06:40.231 "nvmf_delete_target", 00:06:40.231 "nvmf_create_target", 00:06:40.231 "nvmf_subsystem_allow_any_host", 00:06:40.231 "nvmf_subsystem_set_keys", 00:06:40.231 "nvmf_subsystem_remove_host", 00:06:40.231 "nvmf_subsystem_add_host", 00:06:40.231 "nvmf_ns_remove_host", 00:06:40.231 "nvmf_ns_add_host", 00:06:40.231 "nvmf_subsystem_remove_ns", 00:06:40.231 "nvmf_subsystem_set_ns_ana_group", 00:06:40.231 "nvmf_subsystem_add_ns", 00:06:40.231 "nvmf_subsystem_listener_set_ana_state", 00:06:40.231 "nvmf_discovery_get_referrals", 00:06:40.231 "nvmf_discovery_remove_referral", 00:06:40.231 "nvmf_discovery_add_referral", 00:06:40.231 "nvmf_subsystem_remove_listener", 00:06:40.231 "nvmf_subsystem_add_listener", 00:06:40.231 "nvmf_delete_subsystem", 00:06:40.231 "nvmf_create_subsystem", 00:06:40.231 "nvmf_get_subsystems", 00:06:40.231 "env_dpdk_get_mem_stats", 00:06:40.231 "nbd_get_disks", 00:06:40.231 "nbd_stop_disk", 00:06:40.231 "nbd_start_disk", 00:06:40.231 "ublk_recover_disk", 00:06:40.231 "ublk_get_disks", 00:06:40.231 "ublk_stop_disk", 00:06:40.231 "ublk_start_disk", 00:06:40.231 "ublk_destroy_target", 00:06:40.231 "ublk_create_target", 00:06:40.231 "virtio_blk_create_transport", 00:06:40.231 "virtio_blk_get_transports", 00:06:40.231 "vhost_controller_set_coalescing", 00:06:40.231 "vhost_get_controllers", 00:06:40.231 "vhost_delete_controller", 00:06:40.231 "vhost_create_blk_controller", 00:06:40.231 "vhost_scsi_controller_remove_target", 00:06:40.231 "vhost_scsi_controller_add_target", 00:06:40.231 "vhost_start_scsi_controller", 00:06:40.231 "vhost_create_scsi_controller", 00:06:40.231 "thread_set_cpumask", 00:06:40.231 "scheduler_set_options", 00:06:40.231 "framework_get_governor", 00:06:40.231 "framework_get_scheduler", 00:06:40.231 "framework_set_scheduler", 00:06:40.231 "framework_get_reactors", 00:06:40.231 "thread_get_io_channels", 00:06:40.231 "thread_get_pollers", 00:06:40.231 "thread_get_stats", 00:06:40.231 "framework_monitor_context_switch", 00:06:40.231 "spdk_kill_instance", 00:06:40.231 "log_enable_timestamps", 00:06:40.231 "log_get_flags", 00:06:40.231 "log_clear_flag", 00:06:40.231 "log_set_flag", 00:06:40.231 "log_get_level", 00:06:40.231 "log_set_level", 00:06:40.231 "log_get_print_level", 00:06:40.231 "log_set_print_level", 00:06:40.231 "framework_enable_cpumask_locks", 00:06:40.231 "framework_disable_cpumask_locks", 00:06:40.231 "framework_wait_init", 00:06:40.231 "framework_start_init", 00:06:40.231 "scsi_get_devices", 00:06:40.231 "bdev_get_histogram", 00:06:40.231 "bdev_enable_histogram", 00:06:40.231 "bdev_set_qos_limit", 00:06:40.231 "bdev_set_qd_sampling_period", 00:06:40.231 "bdev_get_bdevs", 00:06:40.231 "bdev_reset_iostat", 00:06:40.231 "bdev_get_iostat", 00:06:40.231 "bdev_examine", 00:06:40.231 "bdev_wait_for_examine", 00:06:40.231 "bdev_set_options", 00:06:40.231 "accel_get_stats", 00:06:40.231 "accel_set_options", 00:06:40.231 "accel_set_driver", 00:06:40.231 "accel_crypto_key_destroy", 00:06:40.231 "accel_crypto_keys_get", 00:06:40.231 "accel_crypto_key_create", 00:06:40.231 "accel_assign_opc", 00:06:40.231 "accel_get_module_info", 00:06:40.231 "accel_get_opc_assignments", 00:06:40.231 "vmd_rescan", 00:06:40.231 "vmd_remove_device", 00:06:40.231 "vmd_enable", 00:06:40.231 "sock_get_default_impl", 00:06:40.231 "sock_set_default_impl", 00:06:40.231 "sock_impl_set_options", 00:06:40.231 
"sock_impl_get_options", 00:06:40.231 "iobuf_get_stats", 00:06:40.231 "iobuf_set_options", 00:06:40.231 "keyring_get_keys", 00:06:40.231 "framework_get_pci_devices", 00:06:40.231 "framework_get_config", 00:06:40.231 "framework_get_subsystems", 00:06:40.231 "fsdev_set_opts", 00:06:40.231 "fsdev_get_opts", 00:06:40.231 "trace_get_info", 00:06:40.231 "trace_get_tpoint_group_mask", 00:06:40.231 "trace_disable_tpoint_group", 00:06:40.231 "trace_enable_tpoint_group", 00:06:40.231 "trace_clear_tpoint_mask", 00:06:40.231 "trace_set_tpoint_mask", 00:06:40.231 "notify_get_notifications", 00:06:40.231 "notify_get_types", 00:06:40.231 "spdk_get_version", 00:06:40.231 "rpc_get_methods" 00:06:40.231 ] 00:06:40.231 22:37:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:40.231 22:37:54 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:40.231 22:37:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.490 22:37:55 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:40.490 22:37:55 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69993 00:06:40.490 22:37:55 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 69993 ']' 00:06:40.490 22:37:55 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 69993 00:06:40.490 22:37:55 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:40.490 22:37:55 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:40.490 22:37:55 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69993 00:06:40.490 killing process with pid 69993 00:06:40.490 22:37:55 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:40.490 22:37:55 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:40.490 22:37:55 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69993' 00:06:40.490 22:37:55 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 69993 00:06:40.490 22:37:55 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 69993 00:06:40.748 ************************************ 00:06:40.748 END TEST spdkcli_tcp 00:06:40.748 ************************************ 00:06:40.748 00:06:40.748 real 0m1.829s 00:06:40.748 user 0m3.486s 00:06:40.748 sys 0m0.412s 00:06:40.748 22:37:55 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:40.748 22:37:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:40.748 22:37:55 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:40.748 22:37:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:40.748 22:37:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:40.748 22:37:55 -- common/autotest_common.sh@10 -- # set +x 00:06:40.748 ************************************ 00:06:40.748 START TEST dpdk_mem_utility 00:06:40.748 ************************************ 00:06:40.748 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:40.748 * Looking for test storage... 
00:06:40.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:40.748 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:40.748 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:40.748 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:40.748 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:40.748 22:37:55 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.748 22:37:55 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.748 22:37:55 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.748 22:37:55 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.748 22:37:55 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.748 22:37:55 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.748 22:37:55 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.748 22:37:55 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.748 22:37:55 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.748 22:37:55 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.748 22:37:55 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.748 22:37:55 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:40.748 22:37:55 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:40.748 22:37:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.749 22:37:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.749 22:37:55 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:40.749 22:37:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:40.749 22:37:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.749 22:37:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:40.749 22:37:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.749 22:37:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:40.749 22:37:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:40.749 22:37:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.749 22:37:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:40.749 22:37:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.749 22:37:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.749 22:37:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.749 22:37:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:40.749 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.749 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:40.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.749 --rc genhtml_branch_coverage=1 00:06:40.749 --rc genhtml_function_coverage=1 00:06:40.749 --rc genhtml_legend=1 00:06:40.749 --rc geninfo_all_blocks=1 00:06:40.749 --rc geninfo_unexecuted_blocks=1 00:06:40.749 00:06:40.749 ' 00:06:40.749 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:40.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.749 --rc 
genhtml_branch_coverage=1 00:06:40.749 --rc genhtml_function_coverage=1 00:06:40.749 --rc genhtml_legend=1 00:06:40.749 --rc geninfo_all_blocks=1 00:06:40.749 --rc geninfo_unexecuted_blocks=1 00:06:40.749 00:06:40.749 ' 00:06:40.749 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:40.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.749 --rc genhtml_branch_coverage=1 00:06:40.749 --rc genhtml_function_coverage=1 00:06:40.749 --rc genhtml_legend=1 00:06:40.749 --rc geninfo_all_blocks=1 00:06:40.749 --rc geninfo_unexecuted_blocks=1 00:06:40.749 00:06:40.749 ' 00:06:40.749 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:40.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.749 --rc genhtml_branch_coverage=1 00:06:40.749 --rc genhtml_function_coverage=1 00:06:40.749 --rc genhtml_legend=1 00:06:40.749 --rc geninfo_all_blocks=1 00:06:40.749 --rc geninfo_unexecuted_blocks=1 00:06:40.749 00:06:40.749 ' 00:06:40.749 22:37:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:40.749 22:37:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70092 00:06:40.749 22:37:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:40.749 22:37:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70092 00:06:40.749 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70092 ']' 00:06:40.749 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.749 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.749 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.007 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.007 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.007 [2024-12-07 22:37:55.569183] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
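[The trace above captures the harness's standard startup idiom: launch spdk_tgt in the background, record its pid in $spdkpid, and block in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A minimal sketch of that wait loop, assuming default paths; this illustrates the pattern only and is not the harness's actual waitforlisten implementation:

  $ build/bin/spdk_tgt &
  $ spdkpid=$!
  $ until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done   # loops until the target serves RPCs
]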
00:06:41.007 [2024-12-07 22:37:55.569971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70092 ]
00:06:41.007 [2024-12-07 22:37:55.706939] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:41.007 [2024-12-07 22:37:55.743299] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:06:41.265 [2024-12-07 22:37:55.780457] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:06:41.265 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:41.265 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0
00:06:41.265 22:37:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:06:41.265 22:37:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:06:41.265 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:41.265 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:41.265 {
00:06:41.265 "filename": "/tmp/spdk_mem_dump.txt"
00:06:41.265 }
00:06:41.265 22:37:55 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:41.265 22:37:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:06:41.265 DPDK memory size 860.000000 MiB in 1 heap(s)
00:06:41.265 1 heaps totaling size 860.000000 MiB
00:06:41.265 size: 860.000000 MiB heap id: 0
00:06:41.265 end heaps----------
00:06:41.265 9 mempools totaling size 642.649841 MiB
00:06:41.265 size: 212.674988 MiB name: PDU_immediate_data_Pool
00:06:41.265 size: 158.602051 MiB name: PDU_data_out_Pool
00:06:41.265 size: 92.545471 MiB name: bdev_io_70092
00:06:41.265 size: 51.011292 MiB name: evtpool_70092
00:06:41.265 size: 50.003479 MiB name: msgpool_70092
00:06:41.265 size: 36.509338 MiB name: fsdev_io_70092
00:06:41.265 size: 21.763794 MiB name: PDU_Pool
00:06:41.265 size: 19.513306 MiB name: SCSI_TASK_Pool
00:06:41.265 size: 0.026123 MiB name: Session_Pool
00:06:41.265 end mempools-------
00:06:41.265 6 memzones totaling size 4.142822 MiB
00:06:41.265 size: 1.000366 MiB name: RG_ring_0_70092
00:06:41.265 size: 1.000366 MiB name: RG_ring_1_70092
00:06:41.265 size: 1.000366 MiB name: RG_ring_4_70092
00:06:41.265 size: 1.000366 MiB name: RG_ring_5_70092
00:06:41.265 size: 0.125366 MiB name: RG_ring_2_70092
00:06:41.265 size: 0.015991 MiB name: RG_ring_3_70092
00:06:41.265 end memzones-------
00:06:41.265 22:37:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:06:41.538 heap id: 0 total size: 860.000000 MiB number of busy elements: 325 number of free elements: 16
00:06:41.538 list of free elements.
size: 13.933228 MiB 00:06:41.538 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:41.538 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:41.538 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:41.538 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:41.538 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:41.538 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:41.538 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:41.538 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:41.538 element at address: 0x200000200000 with size: 0.835022 MiB 00:06:41.538 element at address: 0x20001d800000 with size: 0.566589 MiB 00:06:41.538 element at address: 0x20000d800000 with size: 0.489258 MiB 00:06:41.538 element at address: 0x200003e00000 with size: 0.487183 MiB 00:06:41.538 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:41.538 element at address: 0x200007000000 with size: 0.480286 MiB 00:06:41.538 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:41.538 element at address: 0x200003a00000 with size: 0.352112 MiB 00:06:41.538 list of standard malloc elements. size: 199.270081 MiB 00:06:41.538 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:41.538 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:41.538 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:41.538 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:41.538 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:41.538 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:41.538 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:41.538 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:41.538 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:41.538 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d6e00 with size: 0.000183 MiB 
00:06:41.538 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:41.538 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a5a240 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a5e700 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a7e9c0 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a7ea80 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a7eb40 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a7ec00 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003e7cb80 with size: 0.000183 MiB 00:06:41.538 element at address: 0x200003e7cc40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7cd00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7cdc0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7ce80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7cf40 with size: 0.000183 MiB 00:06:41.539 element at 
address: 0x200003e7d000 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7d0c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7d180 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7d240 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7d300 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7d3c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7d480 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000707af40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000707b480 
with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:41.539 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000d87d400 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000d87dac0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d8910c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d891180 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d891240 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d891300 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d8913c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d891480 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d891540 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d891600 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d8925c0 with size: 0.000183 MiB 
00:06:41.539 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d892b00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:41.539 element at 
address: 0x20001d894b40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d895080 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6dc80 
with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6e1c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:41.539 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:41.539 list of memzone associated elements. 
size: 646.796692 MiB 00:06:41.539 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:41.539 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:41.539 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:41.539 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:41.539 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:41.539 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70092_0 00:06:41.539 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:41.539 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70092_0 00:06:41.539 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:41.539 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70092_0 00:06:41.539 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:41.539 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70092_0 00:06:41.539 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:41.539 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:41.539 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:41.539 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:41.539 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:41.539 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70092 00:06:41.539 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:41.539 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70092 00:06:41.539 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:41.539 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70092 00:06:41.539 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:41.539 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:41.539 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:41.539 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:41.539 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:41.539 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:41.539 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:41.539 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:41.539 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:41.539 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70092 00:06:41.539 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:41.539 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70092 00:06:41.539 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:41.539 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70092 00:06:41.539 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:41.539 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70092 00:06:41.539 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:41.539 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70092 00:06:41.539 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:41.539 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70092 00:06:41.539 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:41.539 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:41.540 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:41.540 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:41.540 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:41.540 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:41.540 element at address: 0x200003a5e7c0 with size: 0.125488 MiB 00:06:41.540 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70092 00:06:41.540 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:41.540 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:41.540 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:41.540 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:41.540 element at address: 0x200003a5a500 with size: 0.016113 MiB 00:06:41.540 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70092 00:06:41.540 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:41.540 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:41.540 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:41.540 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70092 00:06:41.540 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:41.540 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70092 00:06:41.540 element at address: 0x200003a5a300 with size: 0.000305 MiB 00:06:41.540 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70092 00:06:41.540 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:41.540 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:41.540 22:37:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:41.540 22:37:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70092 00:06:41.540 22:37:56 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70092 ']' 00:06:41.540 22:37:56 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70092 00:06:41.540 22:37:56 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:41.540 22:37:56 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.540 22:37:56 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70092 00:06:41.540 22:37:56 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.540 22:37:56 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.540 22:37:56 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70092' 00:06:41.540 killing process with pid 70092 00:06:41.540 22:37:56 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70092 00:06:41.540 22:37:56 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70092 00:06:41.799 00:06:41.799 real 0m0.984s 00:06:41.799 user 0m1.021s 00:06:41.799 sys 0m0.327s 00:06:41.799 22:37:56 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.799 ************************************ 00:06:41.799 END TEST dpdk_mem_utility 00:06:41.799 ************************************ 00:06:41.799 22:37:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:41.799 22:37:56 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:41.799 22:37:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.799 22:37:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.799 22:37:56 -- common/autotest_common.sh@10 -- # set +x 
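[The dump above is the substance of this test: the env_dpdk_get_mem_stats RPC makes the target write its DPDK allocator state to /tmp/spdk_mem_dump.txt (the RPC's JSON reply names the file), and scripts/dpdk_mem_info.py renders it. The summary groups memory three ways: heaps (the 860 MiB of hugepage-backed malloc space), mempools (fixed-size object pools such as msgpool_70092 and bdev_io_70092, named after the owning pid), and memzones (named reserved regions such as the RG_ring_* rings). The element-by-element listing comes from the second invocation with -m 0, which walks heap 0. The sequence can be reproduced by hand against any running target:

  $ scripts/rpc.py env_dpdk_get_mem_stats      # target writes /tmp/spdk_mem_dump.txt
  $ scripts/dpdk_mem_info.py                   # heap/mempool/memzone summary
  $ scripts/dpdk_mem_info.py -m 0              # element-level map of heap 0
]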
00:06:41.799 ************************************ 00:06:41.799 START TEST event 00:06:41.799 ************************************ 00:06:41.799 22:37:56 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:41.799 * Looking for test storage... 00:06:41.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:41.799 22:37:56 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:41.799 22:37:56 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:41.799 22:37:56 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:41.799 22:37:56 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:41.799 22:37:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.799 22:37:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.799 22:37:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.799 22:37:56 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.799 22:37:56 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.799 22:37:56 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.799 22:37:56 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.799 22:37:56 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.799 22:37:56 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.799 22:37:56 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.799 22:37:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.799 22:37:56 event -- scripts/common.sh@344 -- # case "$op" in 00:06:41.799 22:37:56 event -- scripts/common.sh@345 -- # : 1 00:06:41.799 22:37:56 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.799 22:37:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.799 22:37:56 event -- scripts/common.sh@365 -- # decimal 1 00:06:41.799 22:37:56 event -- scripts/common.sh@353 -- # local d=1 00:06:41.799 22:37:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.799 22:37:56 event -- scripts/common.sh@355 -- # echo 1 00:06:41.799 22:37:56 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.799 22:37:56 event -- scripts/common.sh@366 -- # decimal 2 00:06:41.799 22:37:56 event -- scripts/common.sh@353 -- # local d=2 00:06:41.799 22:37:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.799 22:37:56 event -- scripts/common.sh@355 -- # echo 2 00:06:41.799 22:37:56 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.799 22:37:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.799 22:37:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.799 22:37:56 event -- scripts/common.sh@368 -- # return 0 00:06:41.799 22:37:56 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.799 22:37:56 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:41.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.799 --rc genhtml_branch_coverage=1 00:06:41.799 --rc genhtml_function_coverage=1 00:06:41.799 --rc genhtml_legend=1 00:06:41.799 --rc geninfo_all_blocks=1 00:06:41.799 --rc geninfo_unexecuted_blocks=1 00:06:41.799 00:06:41.799 ' 00:06:41.799 22:37:56 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:41.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.799 --rc genhtml_branch_coverage=1 00:06:41.799 --rc genhtml_function_coverage=1 00:06:41.799 --rc genhtml_legend=1 00:06:41.799 --rc 
geninfo_all_blocks=1 00:06:41.799 --rc geninfo_unexecuted_blocks=1 00:06:41.799 00:06:41.799 ' 00:06:41.799 22:37:56 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:41.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.799 --rc genhtml_branch_coverage=1 00:06:41.799 --rc genhtml_function_coverage=1 00:06:41.799 --rc genhtml_legend=1 00:06:41.799 --rc geninfo_all_blocks=1 00:06:41.799 --rc geninfo_unexecuted_blocks=1 00:06:41.799 00:06:41.799 ' 00:06:41.799 22:37:56 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:41.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.799 --rc genhtml_branch_coverage=1 00:06:41.799 --rc genhtml_function_coverage=1 00:06:41.799 --rc genhtml_legend=1 00:06:41.799 --rc geninfo_all_blocks=1 00:06:41.799 --rc geninfo_unexecuted_blocks=1 00:06:41.799 00:06:41.799 ' 00:06:41.799 22:37:56 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:41.799 22:37:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:41.799 22:37:56 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:41.799 22:37:56 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:41.799 22:37:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.799 22:37:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.799 ************************************ 00:06:41.799 START TEST event_perf 00:06:41.799 ************************************ 00:06:41.800 22:37:56 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:42.059 Running I/O for 1 seconds...[2024-12-07 22:37:56.574526] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:42.059 [2024-12-07 22:37:56.574794] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70164 ] 00:06:42.059 [2024-12-07 22:37:56.711085] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.059 [2024-12-07 22:37:56.746827] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.059 [2024-12-07 22:37:56.746959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.059 [2024-12-07 22:37:56.747009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.059 [2024-12-07 22:37:56.747010] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.434 Running I/O for 1 seconds... 00:06:43.434 lcore 0: 200164 00:06:43.434 lcore 1: 200163 00:06:43.434 lcore 2: 200164 00:06:43.434 lcore 3: 200164 00:06:43.434 done. 
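[The lcore counters above are event_perf's result: with -m 0xF the app starts one reactor on each core in the mask, and for -t 1 second every reactor submits and processes events in a tight loop, bumping a per-lcore counter on each event. Four counters landing within one event of each other (~200k apiece) is the balanced behavior the test expects to demonstrate. The invocation, as run here:

  $ test/event/event_perf/event_perf -m 0xF -t 1
]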
00:06:43.434 ************************************ 00:06:43.434 END TEST event_perf 00:06:43.434 ************************************ 00:06:43.434 00:06:43.434 real 0m1.246s 00:06:43.434 user 0m4.077s 00:06:43.434 sys 0m0.049s 00:06:43.434 22:37:57 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.434 22:37:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:43.435 22:37:57 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:43.435 22:37:57 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:43.435 22:37:57 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.435 22:37:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.435 ************************************ 00:06:43.435 START TEST event_reactor 00:06:43.435 ************************************ 00:06:43.435 22:37:57 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:43.435 [2024-12-07 22:37:57.871290] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:43.435 [2024-12-07 22:37:57.871528] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70203 ] 00:06:43.435 [2024-12-07 22:37:57.999360] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.435 [2024-12-07 22:37:58.034724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.371 test_start 00:06:44.371 oneshot 00:06:44.371 tick 100 00:06:44.371 tick 100 00:06:44.371 tick 250 00:06:44.371 tick 100 00:06:44.371 tick 100 00:06:44.371 tick 100 00:06:44.371 tick 250 00:06:44.371 tick 500 00:06:44.371 tick 100 00:06:44.371 tick 100 00:06:44.371 tick 250 00:06:44.371 tick 100 00:06:44.371 tick 100 00:06:44.371 test_end 00:06:44.371 ************************************ 00:06:44.371 END TEST event_reactor 00:06:44.371 ************************************ 00:06:44.371 00:06:44.371 real 0m1.227s 00:06:44.371 user 0m1.087s 00:06:44.371 sys 0m0.035s 00:06:44.371 22:37:59 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.371 22:37:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:44.371 22:37:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:44.371 22:37:59 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:44.371 22:37:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.371 22:37:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.371 ************************************ 00:06:44.371 START TEST event_reactor_perf 00:06:44.371 ************************************ 00:06:44.371 22:37:59 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:44.629 [2024-12-07 22:37:59.146346] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
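[The test_start/oneshot/tick/test_end block above is the event_reactor run: on a single reactor (-c 0x1) it schedules a one-shot event plus repeating timers, and each "tick N" line appears to record one firing of the timer registered with period N, which would explain why tick 100 shows up most often and tick 500 only once in the one-second window; the authoritative period values and units live in test/event/reactor/reactor.c, not in this log. The run itself:

  $ test/event/reactor/reactor -t 1
]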
00:06:44.629 [2024-12-07 22:37:59.146430] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70233 ] 00:06:44.630 [2024-12-07 22:37:59.274483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.630 [2024-12-07 22:37:59.311123] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.004 test_start 00:06:46.004 test_end 00:06:46.004 Performance: 451947 events per second 00:06:46.004 00:06:46.004 real 0m1.231s 00:06:46.004 user 0m1.087s 00:06:46.004 sys 0m0.039s 00:06:46.004 22:38:00 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:46.004 ************************************ 00:06:46.004 END TEST event_reactor_perf 00:06:46.004 ************************************ 00:06:46.004 22:38:00 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.004 22:38:00 event -- event/event.sh@49 -- # uname -s 00:06:46.004 22:38:00 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:46.004 22:38:00 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:46.004 22:38:00 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.004 22:38:00 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.004 22:38:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:46.004 ************************************ 00:06:46.004 START TEST event_scheduler 00:06:46.004 ************************************ 00:06:46.004 22:38:00 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:46.004 * Looking for test storage... 
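[The "Performance: 451947 events per second" figure above is event_reactor_perf's single metric: one reactor (-c 0x1 per the EAL parameters) chains events back-to-back for the -t 1 second window and reports how many it pushed through, here roughly 450K events/s on this VM host. Command as run:

  $ test/event/reactor_perf/reactor_perf -t 1
]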
00:06:46.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:46.004 22:38:00 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:46.004 22:38:00 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:46.004 22:38:00 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:46.004 22:38:00 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.004 22:38:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:46.004 22:38:00 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.004 22:38:00 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:46.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.004 --rc genhtml_branch_coverage=1 00:06:46.004 --rc genhtml_function_coverage=1 00:06:46.004 --rc genhtml_legend=1 00:06:46.004 --rc geninfo_all_blocks=1 00:06:46.004 --rc geninfo_unexecuted_blocks=1 00:06:46.004 00:06:46.004 ' 00:06:46.004 22:38:00 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:46.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.004 --rc genhtml_branch_coverage=1 00:06:46.004 --rc genhtml_function_coverage=1 00:06:46.004 --rc genhtml_legend=1 00:06:46.004 --rc geninfo_all_blocks=1 00:06:46.004 --rc geninfo_unexecuted_blocks=1 00:06:46.004 00:06:46.004 ' 00:06:46.004 22:38:00 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:46.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.004 --rc genhtml_branch_coverage=1 00:06:46.004 --rc genhtml_function_coverage=1 00:06:46.004 --rc genhtml_legend=1 00:06:46.004 --rc geninfo_all_blocks=1 00:06:46.004 --rc geninfo_unexecuted_blocks=1 00:06:46.004 00:06:46.004 ' 00:06:46.004 22:38:00 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:46.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.004 --rc genhtml_branch_coverage=1 00:06:46.004 --rc genhtml_function_coverage=1 00:06:46.004 --rc genhtml_legend=1 00:06:46.004 --rc geninfo_all_blocks=1 00:06:46.004 --rc geninfo_unexecuted_blocks=1 00:06:46.004 00:06:46.004 ' 00:06:46.004 22:38:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:46.004 22:38:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70302 00:06:46.004 22:38:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:46.004 22:38:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70302 00:06:46.004 22:38:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:46.004 22:38:00 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70302 ']' 00:06:46.004 22:38:00 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.004 22:38:00 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.004 22:38:00 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.004 22:38:00 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.004 22:38:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.004 [2024-12-07 22:38:00.655924] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:06:46.004 [2024-12-07 22:38:00.656033] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70302 ] 00:06:46.264 [2024-12-07 22:38:00.795046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.264 [2024-12-07 22:38:00.838831] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.264 [2024-12-07 22:38:00.838967] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.264 [2024-12-07 22:38:00.840198] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.264 [2024-12-07 22:38:00.840259] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.264 22:38:00 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.264 22:38:00 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:46.264 22:38:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:46.264 22:38:00 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.264 22:38:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.264 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:46.264 POWER: Cannot set governor of lcore 0 to userspace 00:06:46.264 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:46.264 POWER: Cannot set governor of lcore 0 to performance 00:06:46.264 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:46.264 POWER: Cannot set governor of lcore 0 to userspace 00:06:46.264 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:46.264 POWER: Unable to set Power Management Environment for lcore 0 00:06:46.264 [2024-12-07 22:38:00.917682] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:46.264 [2024-12-07 22:38:00.917696] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:46.264 [2024-12-07 22:38:00.917722] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:46.264 [2024-12-07 22:38:00.917740] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:46.264 [2024-12-07 22:38:00.917749] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:46.264 [2024-12-07 22:38:00.917758] scheduler_dynamic.c:
431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:46.264 22:38:00 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.264 22:38:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:46.264 22:38:00 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.264 22:38:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.264 [2024-12-07 22:38:00.957130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.264 [2024-12-07 22:38:00.974008] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:46.264 22:38:00 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.264 22:38:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:46.264 22:38:00 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:46.264 22:38:00 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:46.264 22:38:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:46.264 ************************************ 00:06:46.264 START TEST scheduler_create_thread 00:06:46.264 ************************************ 00:06:46.264 22:38:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:46.264 22:38:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:46.264 22:38:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.264 22:38:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.264 2 00:06:46.264 22:38:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.264 22:38:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:46.264 22:38:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.264 22:38:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.264 3 00:06:46.264 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.264 22:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:46.264 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.264 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.264 4 00:06:46.264 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.264 22:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:46.264 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.264 22:38:01 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:46.264 5 00:06:46.264 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.264 22:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:46.264 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.264 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.523 6 00:06:46.523 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.523 22:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.524 7 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.524 8 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.524 9 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.524 10 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@22 -- # thread_id=11 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:46.524 22:38:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.902 22:38:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.902 22:38:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:47.902 22:38:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:47.902 22:38:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.902 22:38:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:48.835 22:38:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.835 00:06:48.835 real 0m2.612s 00:06:48.835 user 0m0.014s 00:06:48.835 sys 0m0.004s 00:06:48.835 ************************************ 00:06:48.835 END TEST scheduler_create_thread 00:06:48.835 ************************************ 00:06:48.835 22:38:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.835 22:38:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.092 22:38:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:49.092 22:38:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70302 00:06:49.092 22:38:03 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70302 ']' 00:06:49.092 22:38:03 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70302 00:06:49.092 22:38:03 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:49.092 22:38:03 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.092 22:38:03 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70302 00:06:49.092 killing process with pid 70302 00:06:49.092 22:38:03 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:49.092 22:38:03 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:49.092 22:38:03 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70302' 00:06:49.092 22:38:03 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70302 00:06:49.092 22:38:03 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70302 00:06:49.350 [2024-12-07 22:38:04.077015] scheduler.c: 360:test_shutdown: *NOTICE*: 
Scheduler test application stopped. 00:06:49.608 00:06:49.608 real 0m3.804s 00:06:49.608 user 0m5.655s 00:06:49.608 sys 0m0.321s 00:06:49.608 22:38:04 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.608 ************************************ 00:06:49.608 END TEST event_scheduler 00:06:49.608 ************************************ 00:06:49.608 22:38:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:49.608 22:38:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:49.608 22:38:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:49.608 22:38:04 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.608 22:38:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.608 22:38:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:49.608 ************************************ 00:06:49.608 START TEST app_repeat 00:06:49.608 ************************************ 00:06:49.608 22:38:04 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:49.608 22:38:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.608 22:38:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.608 22:38:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:49.608 22:38:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:49.608 22:38:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:49.608 22:38:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:49.608 22:38:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:49.608 Process app_repeat pid: 70389 00:06:49.608 spdk_app_start Round 0 00:06:49.608 22:38:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70389 00:06:49.608 22:38:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:49.608 22:38:04 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:49.609 22:38:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70389' 00:06:49.609 22:38:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:49.609 22:38:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:49.609 22:38:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70389 /var/tmp/spdk-nbd.sock 00:06:49.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:49.609 22:38:04 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70389 ']' 00:06:49.609 22:38:04 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.609 22:38:04 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.609 22:38:04 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:49.609 22:38:04 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.609 22:38:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:49.609 [2024-12-07 22:38:04.303530] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
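From here on the trace is the app_repeat test: the same app (pid 70389) is started once, exercised over NBD, and asked to shut down three times. A condensed sketch of the driver logic, reconstructed from the event.sh trace lines above; helper names, flags, and paths are copied from the trace, not from the upstream script, so details may differ:

    # sketch of test/event/event.sh's app_repeat_test, paraphrased from the trace
    rpc_server=/var/tmp/spdk-nbd.sock
    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s $rpc_server"
    /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"    # block until the app answers RPCs
        $rpc_py bdev_malloc_create 64 4096           # 64 MB bdev, 4096-byte blocks -> Malloc0
        $rpc_py bdev_malloc_create 64 4096           # -> Malloc1
        nbd_rpc_data_verify "$rpc_server" 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        $rpc_py spdk_kill_instance SIGTERM           # end this round's app instance
        sleep 3
    done

The same pid appearing in every round is the point of the test: app_repeat appears to restart the SPDK app framework in-process rather than exiting, so each SIGTERM exercises a full shutdown and startup cycle.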
00:06:49.609 [2024-12-07 22:38:04.303798] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70389 ] 00:06:49.866 [2024-12-07 22:38:04.435583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.866 [2024-12-07 22:38:04.469598] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.866 [2024-12-07 22:38:04.469607] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.866 [2024-12-07 22:38:04.499125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:49.866 22:38:04 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.866 22:38:04 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:49.866 22:38:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.123 Malloc0 00:06:50.123 22:38:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.381 Malloc1 00:06:50.638 22:38:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.638 22:38:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.638 22:38:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.638 22:38:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:50.638 22:38:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.638 22:38:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:50.638 22:38:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.638 22:38:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.638 22:38:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.638 22:38:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.638 22:38:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.638 22:38:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.638 22:38:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:50.638 22:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.638 22:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.638 22:38:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:50.895 /dev/nbd0 00:06:50.895 22:38:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:50.895 22:38:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:50.895 22:38:05 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:50.895 22:38:05 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:50.895 22:38:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:50.895 22:38:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:50.895 22:38:05 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:50.895 22:38:05 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:50.895 22:38:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:50.895 22:38:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:50.895 22:38:05 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.895 1+0 records in 00:06:50.895 1+0 records out 00:06:50.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028381 s, 14.4 MB/s 00:06:50.895 22:38:05 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.895 22:38:05 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:50.895 22:38:05 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.895 22:38:05 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:50.895 22:38:05 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:50.895 22:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.895 22:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.895 22:38:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:51.152 /dev/nbd1 00:06:51.152 22:38:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.152 22:38:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.152 22:38:05 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:51.152 22:38:05 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:51.152 22:38:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:51.152 22:38:05 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:51.152 22:38:05 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:51.152 22:38:05 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:51.152 22:38:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:51.152 22:38:05 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:51.152 22:38:05 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.152 1+0 records in 00:06:51.152 1+0 records out 00:06:51.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326991 s, 12.5 MB/s 00:06:51.152 22:38:05 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.152 22:38:05 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:51.152 22:38:05 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.152 22:38:05 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:51.152 22:38:05 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:51.152 22:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.152 22:38:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.152 22:38:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:06:51.152 22:38:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.152 22:38:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.410 { 00:06:51.410 "nbd_device": "/dev/nbd0", 00:06:51.410 "bdev_name": "Malloc0" 00:06:51.410 }, 00:06:51.410 { 00:06:51.410 "nbd_device": "/dev/nbd1", 00:06:51.410 "bdev_name": "Malloc1" 00:06:51.410 } 00:06:51.410 ]' 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.410 { 00:06:51.410 "nbd_device": "/dev/nbd0", 00:06:51.410 "bdev_name": "Malloc0" 00:06:51.410 }, 00:06:51.410 { 00:06:51.410 "nbd_device": "/dev/nbd1", 00:06:51.410 "bdev_name": "Malloc1" 00:06:51.410 } 00:06:51.410 ]' 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:51.410 /dev/nbd1' 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:51.410 /dev/nbd1' 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:51.410 256+0 records in 00:06:51.410 256+0 records out 00:06:51.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105052 s, 99.8 MB/s 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.410 22:38:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:51.668 256+0 records in 00:06:51.668 256+0 records out 00:06:51.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222726 s, 47.1 MB/s 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:51.668 256+0 records in 00:06:51.668 256+0 records out 00:06:51.668 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239547 s, 43.8 MB/s 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.668 22:38:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:51.927 22:38:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.927 22:38:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.927 22:38:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.927 22:38:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.927 22:38:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.927 22:38:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.927 22:38:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.927 22:38:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.927 22:38:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.927 22:38:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:52.184 22:38:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:52.184 22:38:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:52.184 22:38:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:52.184 22:38:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.184 22:38:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.184 22:38:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:52.184 22:38:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.184 22:38:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.184 22:38:06 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.184 22:38:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.184 22:38:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.442 22:38:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.442 22:38:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.442 22:38:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.442 22:38:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.442 22:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.442 22:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.442 22:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:52.442 22:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.442 22:38:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.442 22:38:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.442 22:38:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.442 22:38:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.442 22:38:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.702 22:38:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:52.961 [2024-12-07 22:38:07.496006] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.961 [2024-12-07 22:38:07.527898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.961 [2024-12-07 22:38:07.527904] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.961 [2024-12-07 22:38:07.556396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:52.961 [2024-12-07 22:38:07.556505] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:52.961 [2024-12-07 22:38:07.556519] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:56.248 22:38:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:56.248 spdk_app_start Round 1 00:06:56.248 22:38:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:56.248 22:38:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70389 /var/tmp/spdk-nbd.sock 00:06:56.248 22:38:10 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70389 ']' 00:06:56.248 22:38:10 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.248 22:38:10 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.248 22:38:10 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
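Round 0's nbd_dd_data_verify above compresses the whole data-path check into dd and cmp calls. The shape of that helper, as it can be read back out of the nbd_common.sh trace (a paraphrase only: error handling and the surrounding nbd_start_disks/nbd_stop_disks RPCs are omitted):

    # write/verify cycle behind each nbd_dd_data_verify block in the trace
    nbd_list=('/dev/nbd0' '/dev/nbd1')
    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    # write phase: one 1 MiB random pattern, copied onto every NBD device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for i in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
    done
    # verify phase: each device must read back byte-identical to the pattern
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$i"    # any non-zero exit fails the test
    done
    rm "$tmp_file"

The oflag=direct on the per-device writes keeps the page cache out of the picture, so a cmp mismatch points at the Malloc bdev data path over NBD rather than at Linux caching.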
00:06:56.248 22:38:10 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.248 22:38:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.248 22:38:10 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.248 22:38:10 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:56.248 22:38:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.248 Malloc0 00:06:56.248 22:38:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.508 Malloc1 00:06:56.508 22:38:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.508 22:38:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.508 22:38:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.508 22:38:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:56.508 22:38:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.508 22:38:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:56.508 22:38:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.508 22:38:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.508 22:38:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.508 22:38:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:56.508 22:38:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.508 22:38:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:56.508 22:38:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:56.508 22:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:56.508 22:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.508 22:38:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:57.075 /dev/nbd0 00:06:57.075 22:38:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:57.075 22:38:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:57.075 22:38:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:57.075 22:38:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:57.075 22:38:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.075 22:38:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.075 22:38:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:57.075 22:38:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:57.075 22:38:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.075 22:38:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.075 22:38:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:57.075 1+0 records in 00:06:57.075 1+0 records out 
00:06:57.075 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315415 s, 13.0 MB/s 00:06:57.075 22:38:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.075 22:38:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:57.075 22:38:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.075 22:38:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.075 22:38:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:57.075 22:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.075 22:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.075 22:38:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:57.335 /dev/nbd1 00:06:57.335 22:38:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:57.335 22:38:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:57.335 22:38:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:57.335 22:38:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:57.335 22:38:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:57.335 22:38:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:57.335 22:38:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:57.335 22:38:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:57.335 22:38:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:57.335 22:38:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:57.335 22:38:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:57.335 1+0 records in 00:06:57.335 1+0 records out 00:06:57.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00271034 s, 1.5 MB/s 00:06:57.335 22:38:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.335 22:38:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:57.335 22:38:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:57.335 22:38:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:57.335 22:38:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:57.335 22:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:57.335 22:38:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:57.335 22:38:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.335 22:38:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.335 22:38:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:57.594 { 00:06:57.594 "nbd_device": "/dev/nbd0", 00:06:57.594 "bdev_name": "Malloc0" 00:06:57.594 }, 00:06:57.594 { 00:06:57.594 "nbd_device": "/dev/nbd1", 00:06:57.594 "bdev_name": "Malloc1" 00:06:57.594 } 
00:06:57.594 ]' 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:57.594 { 00:06:57.594 "nbd_device": "/dev/nbd0", 00:06:57.594 "bdev_name": "Malloc0" 00:06:57.594 }, 00:06:57.594 { 00:06:57.594 "nbd_device": "/dev/nbd1", 00:06:57.594 "bdev_name": "Malloc1" 00:06:57.594 } 00:06:57.594 ]' 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:57.594 /dev/nbd1' 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:57.594 /dev/nbd1' 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:57.594 256+0 records in 00:06:57.594 256+0 records out 00:06:57.594 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108119 s, 97.0 MB/s 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:57.594 256+0 records in 00:06:57.594 256+0 records out 00:06:57.594 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236246 s, 44.4 MB/s 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.594 22:38:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:57.853 256+0 records in 00:06:57.853 256+0 records out 00:06:57.853 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0285506 s, 36.7 MB/s 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:57.853 22:38:12 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.853 22:38:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:58.111 22:38:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:58.111 22:38:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:58.111 22:38:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:58.111 22:38:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.111 22:38:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.111 22:38:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:58.111 22:38:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:58.111 22:38:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.111 22:38:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.111 22:38:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:58.370 22:38:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:58.370 22:38:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:58.370 22:38:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:58.370 22:38:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:58.370 22:38:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:58.370 22:38:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:58.370 22:38:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:58.370 22:38:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:58.370 22:38:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.370 22:38:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.371 22:38:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.629 22:38:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:58.629 22:38:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:58.629 22:38:13 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:58.629 22:38:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:58.629 22:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.629 22:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:58.629 22:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:58.629 22:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:58.629 22:38:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:58.629 22:38:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:58.629 22:38:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:58.629 22:38:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:58.629 22:38:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:59.196 22:38:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:59.196 [2024-12-07 22:38:13.774388] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.196 [2024-12-07 22:38:13.805644] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.196 [2024-12-07 22:38:13.805654] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.196 [2024-12-07 22:38:13.834557] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:59.196 [2024-12-07 22:38:13.834648] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:59.196 [2024-12-07 22:38:13.834661] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:02.478 spdk_app_start Round 2 00:07:02.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:02.478 22:38:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:02.478 22:38:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:02.478 22:38:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70389 /var/tmp/spdk-nbd.sock 00:07:02.478 22:38:16 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70389 ']' 00:07:02.478 22:38:16 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:02.478 22:38:16 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.478 22:38:16 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
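Each round runs the nbd_get_count check twice: once expecting 2 after the disks are started, and once expecting 0 (the bare '[]' seen just above) after they are stopped. The counting idiom, paraphrased from the nbd_common.sh lines in the trace; the expected value is hardcoded at each call site in the real helper, so $expected below is illustrative:

    # nbd_get_count, reconstructed from the trace (paths as in the log)
    rpc_server=/var/tmp/spdk-nbd.sock
    nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    # grep -c exits non-zero when it counts zero matches, so the `|| true`
    # (the lone `true` in the trace) keeps the `set -e` test script alive
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    [ "$count" -ne "$expected" ] && return 1    # expected: 2 after start, 0 after stop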
00:07:02.478 22:38:16 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.478 22:38:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:02.478 22:38:16 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.478 22:38:16 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:02.478 22:38:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.478 Malloc0 00:07:02.478 22:38:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:02.736 Malloc1 00:07:02.736 22:38:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:02.736 22:38:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.736 22:38:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.736 22:38:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:02.736 22:38:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.736 22:38:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:02.736 22:38:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:02.736 22:38:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.736 22:38:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:02.736 22:38:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:02.736 22:38:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.736 22:38:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:02.736 22:38:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:02.736 22:38:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:02.736 22:38:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:02.736 22:38:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:02.995 /dev/nbd0 00:07:03.255 22:38:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:03.255 22:38:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:03.255 22:38:17 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:03.255 22:38:17 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:03.255 22:38:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.255 22:38:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.255 22:38:17 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:03.255 22:38:17 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:03.255 22:38:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.255 22:38:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.255 22:38:17 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:03.255 1+0 records in 00:07:03.255 1+0 records out 
00:07:03.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000159393 s, 25.7 MB/s 00:07:03.255 22:38:17 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.255 22:38:17 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:03.255 22:38:17 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.255 22:38:17 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.255 22:38:17 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:03.255 22:38:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.255 22:38:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.255 22:38:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:03.514 /dev/nbd1 00:07:03.514 22:38:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:03.514 22:38:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:03.514 22:38:18 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:03.514 22:38:18 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:03.514 22:38:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:03.514 22:38:18 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:03.514 22:38:18 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:03.514 22:38:18 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:03.514 22:38:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:03.514 22:38:18 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:03.514 22:38:18 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:03.514 1+0 records in 00:07:03.514 1+0 records out 00:07:03.514 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275672 s, 14.9 MB/s 00:07:03.514 22:38:18 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.514 22:38:18 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:03.514 22:38:18 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:03.514 22:38:18 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:03.514 22:38:18 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:03.514 22:38:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:03.514 22:38:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.514 22:38:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:03.514 22:38:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.514 22:38:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.772 22:38:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:03.772 { 00:07:03.772 "nbd_device": "/dev/nbd0", 00:07:03.772 "bdev_name": "Malloc0" 00:07:03.772 }, 00:07:03.772 { 00:07:03.772 "nbd_device": "/dev/nbd1", 00:07:03.772 "bdev_name": "Malloc1" 00:07:03.772 } 
00:07:03.772 ]' 00:07:03.772 22:38:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:03.772 { 00:07:03.772 "nbd_device": "/dev/nbd0", 00:07:03.772 "bdev_name": "Malloc0" 00:07:03.772 }, 00:07:03.772 { 00:07:03.772 "nbd_device": "/dev/nbd1", 00:07:03.772 "bdev_name": "Malloc1" 00:07:03.772 } 00:07:03.772 ]' 00:07:03.772 22:38:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.772 22:38:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:03.772 /dev/nbd1' 00:07:03.772 22:38:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:03.772 /dev/nbd1' 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:03.773 256+0 records in 00:07:03.773 256+0 records out 00:07:03.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00980595 s, 107 MB/s 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:03.773 256+0 records in 00:07:03.773 256+0 records out 00:07:03.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241308 s, 43.5 MB/s 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:03.773 256+0 records in 00:07:03.773 256+0 records out 00:07:03.773 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250373 s, 41.9 MB/s 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:03.773 22:38:18 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:03.773 22:38:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.031 22:38:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.031 22:38:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.031 22:38:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.031 22:38:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.031 22:38:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.031 22:38:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.031 22:38:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.031 22:38:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.031 22:38:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.031 22:38:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:04.290 22:38:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:04.290 22:38:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:04.290 22:38:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:04.290 22:38:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.290 22:38:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.290 22:38:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:04.290 22:38:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:04.290 22:38:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.290 22:38:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.290 22:38:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.290 22:38:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.857 22:38:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:04.857 22:38:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:04.857 22:38:19 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:04.857 22:38:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:04.857 22:38:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:04.857 22:38:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.857 22:38:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:04.857 22:38:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:04.857 22:38:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:04.857 22:38:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:04.857 22:38:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:04.857 22:38:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:04.857 22:38:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:05.116 22:38:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:05.116 [2024-12-07 22:38:19.805548] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.116 [2024-12-07 22:38:19.837091] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.116 [2024-12-07 22:38:19.837101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.116 [2024-12-07 22:38:19.864875] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:05.116 [2024-12-07 22:38:19.865006] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:05.116 [2024-12-07 22:38:19.865020] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:08.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:08.404 22:38:22 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70389 /var/tmp/spdk-nbd.sock 00:07:08.404 22:38:22 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70389 ']' 00:07:08.404 22:38:22 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:08.404 22:38:22 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.404 22:38:22 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
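Stepping back, the nbd_rpc_data_verify flow traced above condenses to the following sequence (a sketch assembled from this run's own commands; paths, sizes, and the socket are the ones the trace used):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096        # -> Malloc0 (64 MiB bdev, 4 KiB blocks)
    $rpc bdev_malloc_create 64 4096        # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0  # export each bdev as a kernel nbd device
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    grep -q -w nbd0 /proc/partitions       # waitfornbd: retry until the kernel lists it
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256   # 1 MiB of random data
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0     # read back and verify (likewise for /dev/nbd1)
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_get_disks                     # -> [] once both exports are stopped
    $rpc spdk_kill_instance SIGTERM        # tear the app down between app_repeat rounds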
00:07:08.404 22:38:22 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.404 22:38:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:08.404 22:38:23 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.404 22:38:23 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:08.404 22:38:23 event.app_repeat -- event/event.sh@39 -- # killprocess 70389 00:07:08.404 22:38:23 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70389 ']' 00:07:08.404 22:38:23 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70389 00:07:08.404 22:38:23 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:08.404 22:38:23 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:08.404 22:38:23 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70389 00:07:08.404 killing process with pid 70389 00:07:08.404 22:38:23 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:08.404 22:38:23 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:08.404 22:38:23 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70389' 00:07:08.404 22:38:23 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70389 00:07:08.404 22:38:23 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70389 00:07:08.663 spdk_app_start is called in Round 0. 00:07:08.663 Shutdown signal received, stop current app iteration 00:07:08.663 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:07:08.663 spdk_app_start is called in Round 1. 00:07:08.663 Shutdown signal received, stop current app iteration 00:07:08.663 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:07:08.663 spdk_app_start is called in Round 2. 00:07:08.663 Shutdown signal received, stop current app iteration 00:07:08.663 Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 reinitialization... 00:07:08.663 spdk_app_start is called in Round 3. 00:07:08.663 Shutdown signal received, stop current app iteration 00:07:08.663 22:38:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:08.663 22:38:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:08.663 00:07:08.663 real 0m18.909s 00:07:08.663 user 0m43.671s 00:07:08.663 sys 0m2.529s 00:07:08.663 22:38:23 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.663 22:38:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:08.663 ************************************ 00:07:08.663 END TEST app_repeat 00:07:08.663 ************************************ 00:07:08.663 22:38:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:08.663 22:38:23 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:08.663 22:38:23 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.663 22:38:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.663 22:38:23 event -- common/autotest_common.sh@10 -- # set +x 00:07:08.663 ************************************ 00:07:08.663 START TEST cpu_locks 00:07:08.663 ************************************ 00:07:08.663 22:38:23 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:08.663 * Looking for test storage... 
00:07:08.663 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:08.663 22:38:23 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:08.663 22:38:23 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:08.663 22:38:23 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:08.663 22:38:23 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:08.663 22:38:23 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:08.663 22:38:23 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:08.663 22:38:23 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:08.663 22:38:23 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.663 22:38:23 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:08.663 22:38:23 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:08.663 22:38:23 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:08.663 22:38:23 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:08.663 22:38:23 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:08.663 22:38:23 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:08.663 22:38:23 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:08.663 22:38:23 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:08.663 22:38:23 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:08.663 22:38:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:08.663 22:38:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:08.663 22:38:23 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:08.922 22:38:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:08.922 22:38:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.922 22:38:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:08.922 22:38:23 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:08.922 22:38:23 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:08.922 22:38:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:08.922 22:38:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.922 22:38:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:08.922 22:38:23 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:08.922 22:38:23 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:08.922 22:38:23 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:08.922 22:38:23 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:08.922 22:38:23 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.922 22:38:23 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:08.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.922 --rc genhtml_branch_coverage=1 00:07:08.922 --rc genhtml_function_coverage=1 00:07:08.922 --rc genhtml_legend=1 00:07:08.922 --rc geninfo_all_blocks=1 00:07:08.922 --rc geninfo_unexecuted_blocks=1 00:07:08.922 00:07:08.922 ' 00:07:08.922 22:38:23 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:08.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.922 --rc genhtml_branch_coverage=1 00:07:08.922 --rc genhtml_function_coverage=1 
00:07:08.922 --rc genhtml_legend=1 00:07:08.922 --rc geninfo_all_blocks=1 00:07:08.922 --rc geninfo_unexecuted_blocks=1 00:07:08.922 00:07:08.922 ' 00:07:08.922 22:38:23 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:08.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.922 --rc genhtml_branch_coverage=1 00:07:08.922 --rc genhtml_function_coverage=1 00:07:08.922 --rc genhtml_legend=1 00:07:08.922 --rc geninfo_all_blocks=1 00:07:08.922 --rc geninfo_unexecuted_blocks=1 00:07:08.922 00:07:08.922 ' 00:07:08.922 22:38:23 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:08.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.922 --rc genhtml_branch_coverage=1 00:07:08.922 --rc genhtml_function_coverage=1 00:07:08.922 --rc genhtml_legend=1 00:07:08.922 --rc geninfo_all_blocks=1 00:07:08.922 --rc geninfo_unexecuted_blocks=1 00:07:08.922 00:07:08.922 ' 00:07:08.922 22:38:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:08.922 22:38:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:08.922 22:38:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:08.922 22:38:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:08.922 22:38:23 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:08.922 22:38:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.922 22:38:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.922 ************************************ 00:07:08.922 START TEST default_locks 00:07:08.922 ************************************ 00:07:08.922 22:38:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:08.922 22:38:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70835 00:07:08.922 22:38:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70835 00:07:08.922 22:38:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.923 22:38:23 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70835 ']' 00:07:08.923 22:38:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.923 22:38:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.923 22:38:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.923 22:38:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.923 22:38:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:08.923 [2024-12-07 22:38:23.519313] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
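The lcov version gate traced just before the cpu_locks run boils down to a field-by-field compare. A minimal sketch of the "<" case only (the real scripts/common.sh cmp_versions handles the other operators as well):

    lt() {
      local IFS=.- v
      local -a ver1=($1) ver2=($2)           # split dotted versions on . and -
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
      done
      return 1                               # equal versions are not less-than
    }
    lt 1.15 2 && echo "old lcov"             # the branch this run took: 1 < 2 in the first field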
00:07:08.923 [2024-12-07 22:38:23.519656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70835 ] 00:07:08.923 [2024-12-07 22:38:23.656282] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.195 [2024-12-07 22:38:23.692143] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.195 [2024-12-07 22:38:23.727514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.195 22:38:23 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.195 22:38:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:09.195 22:38:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70835 00:07:09.195 22:38:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70835 00:07:09.195 22:38:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.775 22:38:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70835 00:07:09.775 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70835 ']' 00:07:09.776 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70835 00:07:09.776 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:09.776 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.776 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70835 00:07:09.776 killing process with pid 70835 00:07:09.776 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:09.776 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:09.776 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70835' 00:07:09.776 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70835 00:07:09.776 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70835 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70835 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70835 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:10.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
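For reference, the locks_exist check traced above is a single pipeline (the pid is the one from this run):

    lslocks -p 70835 | grep -q spdk_cpu_lock   # succeeds only while the target holds its core-lock file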
00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70835 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70835 ']' 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.034 ERROR: process (pid: 70835) is no longer running 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.034 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70835) - No such process 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:10.034 ************************************ 00:07:10.034 END TEST default_locks 00:07:10.034 ************************************ 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:10.034 00:07:10.034 real 0m1.156s 00:07:10.034 user 0m1.241s 00:07:10.034 sys 0m0.468s 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.034 22:38:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.034 22:38:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:10.034 22:38:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.034 22:38:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.034 22:38:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.034 ************************************ 00:07:10.034 START TEST default_locks_via_rpc 00:07:10.034 ************************************ 00:07:10.034 22:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:10.034 22:38:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70874 00:07:10.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
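The tail of default_locks is a negative check; in sketch form, using the autotest_common.sh helpers visible in the trace:

    killprocess 70835                            # SIGTERM the target and wait for it to exit
    NOT waitforlisten 70835 /var/tmp/spdk.sock   # must now fail -> NOT inverts that into success
    no_locks                                     # asserts no spdk_cpu_lock files remain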
00:07:10.034 22:38:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.034 22:38:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70874 00:07:10.034 22:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70874 ']' 00:07:10.034 22:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.034 22:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.034 22:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.034 22:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.034 22:38:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.035 [2024-12-07 22:38:24.710523] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:10.035 [2024-12-07 22:38:24.710600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70874 ] 00:07:10.293 [2024-12-07 22:38:24.839215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.293 [2024-12-07 22:38:24.873835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.293 [2024-12-07 22:38:24.909061] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:10.293 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.293 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:10.293 22:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:10.293 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.293 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.293 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.293 22:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:10.293 22:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:10.293 22:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:10.293 22:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:10.293 22:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:10.293 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.293 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.293 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.293 22:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70874 00:07:10.293 22:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # 
lslocks -p 70874 00:07:10.293 22:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.911 22:38:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70874 00:07:10.911 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70874 ']' 00:07:10.911 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70874 00:07:10.911 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:10.911 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.911 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70874 00:07:10.911 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.911 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.911 killing process with pid 70874 00:07:10.912 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70874' 00:07:10.912 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70874 00:07:10.912 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70874 00:07:11.169 00:07:11.169 real 0m1.088s 00:07:11.169 user 0m1.167s 00:07:11.169 sys 0m0.422s 00:07:11.169 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.169 22:38:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.169 ************************************ 00:07:11.169 END TEST default_locks_via_rpc 00:07:11.169 ************************************ 00:07:11.169 22:38:25 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:11.169 22:38:25 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.169 22:38:25 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.169 22:38:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.169 ************************************ 00:07:11.169 START TEST non_locking_app_on_locked_coremask 00:07:11.169 ************************************ 00:07:11.169 22:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:11.169 22:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70912 00:07:11.169 22:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70912 /var/tmp/spdk.sock 00:07:11.169 22:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.169 22:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70912 ']' 00:07:11.169 22:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.169 22:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.169 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:07:11.169 22:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.169 22:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.169 22:38:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.169 [2024-12-07 22:38:25.874116] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:11.169 [2024-12-07 22:38:25.874242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70912 ] 00:07:11.426 [2024-12-07 22:38:26.011912] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.426 [2024-12-07 22:38:26.046632] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.426 [2024-12-07 22:38:26.081670] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.684 22:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:11.684 22:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:11.684 22:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:11.684 22:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70915 00:07:11.684 22:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70915 /var/tmp/spdk2.sock 00:07:11.684 22:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70915 ']' 00:07:11.684 22:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.684 22:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:11.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.684 22:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.684 22:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:11.684 22:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.684 [2024-12-07 22:38:26.241028] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:11.684 [2024-12-07 22:38:26.241144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70915 ] 00:07:11.684 [2024-12-07 22:38:26.373703] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
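For reference, the runtime toggles that default_locks_via_rpc exercised a moment earlier are just two RPCs against the same socket (sketch):

    scripts/rpc.py framework_disable_cpumask_locks   # drop the per-core lock files at runtime
    scripts/rpc.py framework_enable_cpumask_locks    # re-acquire them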
00:07:11.684 [2024-12-07 22:38:26.373834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.943 [2024-12-07 22:38:26.459029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.943 [2024-12-07 22:38:26.533203] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.201 22:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.201 22:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:12.201 22:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70912 00:07:12.201 22:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70912 00:07:12.201 22:38:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.136 22:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70912 00:07:13.136 22:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70912 ']' 00:07:13.136 22:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70912 00:07:13.136 22:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:13.136 22:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.136 22:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70912 00:07:13.136 22:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.136 killing process with pid 70912 00:07:13.136 22:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.136 22:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70912' 00:07:13.136 22:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70912 00:07:13.136 22:38:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70912 00:07:13.705 22:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70915 00:07:13.705 22:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70915 ']' 00:07:13.705 22:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70915 00:07:13.705 22:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:13.705 22:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.705 22:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70915 00:07:13.705 22:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.705 22:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.705 22:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 70915' 00:07:13.705 killing process with pid 70915 00:07:13.705 22:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70915 00:07:13.705 22:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70915 00:07:13.705 00:07:13.705 real 0m2.623s 00:07:13.705 user 0m2.934s 00:07:13.705 sys 0m0.899s 00:07:13.705 22:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.705 22:38:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.705 ************************************ 00:07:13.705 END TEST non_locking_app_on_locked_coremask 00:07:13.705 ************************************ 00:07:13.966 22:38:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:13.966 22:38:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.966 22:38:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.966 22:38:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.966 ************************************ 00:07:13.966 START TEST locking_app_on_unlocked_coremask 00:07:13.966 ************************************ 00:07:13.966 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:13.966 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70976 00:07:13.966 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:13.966 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70976 /var/tmp/spdk.sock 00:07:13.966 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70976 ']' 00:07:13.966 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.966 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.966 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.966 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.966 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.966 [2024-12-07 22:38:28.549800] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:13.966 [2024-12-07 22:38:28.549989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70976 ] 00:07:13.966 [2024-12-07 22:38:28.690364] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
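The non_locking_app_on_locked_coremask test that just ended has this shape (sketch; pids and sockets taken from the trace):

    spdk_tgt -m 0x1 &                                                # pid 70912 claims the core-0 lock
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & # pid 70915 skips locking, so both run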
00:07:13.966 [2024-12-07 22:38:28.690445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.966 [2024-12-07 22:38:28.724948] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.226 [2024-12-07 22:38:28.761706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:14.226 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.226 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:14.226 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70984 00:07:14.226 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:14.226 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70984 /var/tmp/spdk2.sock 00:07:14.226 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70984 ']' 00:07:14.226 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.226 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:14.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.226 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.226 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:14.226 22:38:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.226 [2024-12-07 22:38:28.946725] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
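locking_app_on_unlocked_coremask, starting above, reverses the roles (sketch from the traced commands):

    spdk_tgt -m 0x1 --disable-cpumask-locks &   # pid 70976 starts with locks deactivated
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # pid 70984 is therefore free to claim core 0
    lslocks -p 70984 | grep -q spdk_cpu_lock    # the lock belongs to the second instance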
00:07:14.226 [2024-12-07 22:38:28.946839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70984 ] 00:07:14.485 [2024-12-07 22:38:29.085735] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.485 [2024-12-07 22:38:29.157029] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.485 [2024-12-07 22:38:29.222528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.421 22:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:15.421 22:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:15.421 22:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70984 00:07:15.421 22:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70984 00:07:15.421 22:38:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.357 22:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70976 00:07:16.357 22:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70976 ']' 00:07:16.357 22:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70976 00:07:16.357 22:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:16.357 22:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.357 22:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70976 00:07:16.357 22:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.357 22:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.357 killing process with pid 70976 00:07:16.357 22:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70976' 00:07:16.357 22:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70976 00:07:16.357 22:38:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70976 00:07:16.924 22:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70984 00:07:16.924 22:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70984 ']' 00:07:16.924 22:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70984 00:07:16.924 22:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:16.924 22:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:16.924 22:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70984 00:07:16.924 22:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:16.924 22:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:16.924 killing process with pid 70984 00:07:16.924 22:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70984' 00:07:16.924 22:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70984 00:07:16.924 22:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70984 00:07:17.183 00:07:17.183 real 0m3.227s 00:07:17.183 user 0m3.808s 00:07:17.183 sys 0m0.960s 00:07:17.183 22:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:17.183 22:38:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.183 ************************************ 00:07:17.183 END TEST locking_app_on_unlocked_coremask 00:07:17.183 ************************************ 00:07:17.183 22:38:31 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:17.183 22:38:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:17.183 22:38:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:17.183 22:38:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:17.183 ************************************ 00:07:17.183 START TEST locking_app_on_locked_coremask 00:07:17.183 ************************************ 00:07:17.183 22:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:17.183 22:38:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71051 00:07:17.183 22:38:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:17.184 22:38:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71051 /var/tmp/spdk.sock 00:07:17.184 22:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71051 ']' 00:07:17.184 22:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.184 22:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.184 22:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.184 22:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.184 22:38:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.184 [2024-12-07 22:38:31.823129] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:17.184 [2024-12-07 22:38:31.823250] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71051 ] 00:07:17.443 [2024-12-07 22:38:31.961834] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.443 [2024-12-07 22:38:31.995779] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.443 [2024-12-07 22:38:32.031033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71054 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71054 /var/tmp/spdk2.sock 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71054 /var/tmp/spdk2.sock 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71054 /var/tmp/spdk2.sock 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71054 ']' 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:17.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:17.443 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.703 [2024-12-07 22:38:32.210718] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
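Here the second instance reuses mask 0x1 without --disable-cpumask-locks, so it is expected to abort (sketch; the error line appears verbatim in the trace below):

    spdk_tgt -m 0x1 &                       # pid 71051 holds the core-0 lock
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock  # pid 71054: fails with "Cannot create lock on core 0,
                                            # probably process 71051 has claimed it."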
00:07:17.703 [2024-12-07 22:38:32.210854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71054 ] 00:07:17.703 [2024-12-07 22:38:32.349735] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71051 has claimed it. 00:07:17.703 [2024-12-07 22:38:32.349860] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:18.269 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71054) - No such process 00:07:18.269 ERROR: process (pid: 71054) is no longer running 00:07:18.269 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.269 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:18.269 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:18.269 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:18.269 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:18.269 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:18.269 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71051 00:07:18.269 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71051 00:07:18.269 22:38:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.836 22:38:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71051 00:07:18.836 22:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71051 ']' 00:07:18.836 22:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71051 00:07:18.836 22:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:18.836 22:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.836 22:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71051 00:07:18.836 22:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.836 killing process with pid 71051 00:07:18.836 22:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.836 22:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71051' 00:07:18.836 22:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71051 00:07:18.836 22:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71051 00:07:18.836 00:07:18.836 real 0m1.835s 00:07:18.836 user 0m2.164s 00:07:18.836 sys 0m0.528s 00:07:18.836 22:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.836 ************************************ 00:07:18.836 END 
TEST locking_app_on_locked_coremask 00:07:18.836 ************************************ 00:07:18.836 22:38:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.095 22:38:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:19.095 22:38:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.095 22:38:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.095 22:38:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.095 ************************************ 00:07:19.095 START TEST locking_overlapped_coremask 00:07:19.095 ************************************ 00:07:19.095 22:38:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:19.095 22:38:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71100 00:07:19.095 22:38:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71100 /var/tmp/spdk.sock 00:07:19.095 22:38:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71100 ']' 00:07:19.095 22:38:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.095 22:38:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:19.095 22:38:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.095 22:38:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.095 22:38:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.095 22:38:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.095 [2024-12-07 22:38:33.710129] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
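Note: the locks_exist step in the test that just ended verifies the claim at the OS level — lslocks lists every file lock a process holds, and SPDK's per-core locks show up as spdk_cpu_lock entries. The same check stands alone (sketch; $pid is any running spdk_tgt):

    # Succeeds if the process holds at least one SPDK CPU core lock.
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "pid $pid holds a core lock"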
00:07:19.095 [2024-12-07 22:38:33.710227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71100 ] 00:07:19.095 [2024-12-07 22:38:33.842648] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:19.354 [2024-12-07 22:38:33.878844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.354 [2024-12-07 22:38:33.878994] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.354 [2024-12-07 22:38:33.879014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.354 [2024-12-07 22:38:33.917619] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.354 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.354 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:19.354 22:38:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71110 00:07:19.354 22:38:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:19.354 22:38:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71110 /var/tmp/spdk2.sock 00:07:19.354 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:19.354 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71110 /var/tmp/spdk2.sock 00:07:19.354 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:19.354 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.354 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:19.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.354 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:19.354 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71110 /var/tmp/spdk2.sock 00:07:19.354 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71110 ']' 00:07:19.355 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.355 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.355 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.355 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.355 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.613 [2024-12-07 22:38:34.131896] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:19.613 [2024-12-07 22:38:34.132035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71110 ] 00:07:19.613 [2024-12-07 22:38:34.285083] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71100 has claimed it. 00:07:19.613 [2024-12-07 22:38:34.285175] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:20.181 ERROR: process (pid: 71110) is no longer running 00:07:20.181 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71110) - No such process 00:07:20.181 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.181 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:20.181 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:20.181 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.181 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:20.182 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.182 22:38:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:20.182 22:38:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:20.182 22:38:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:20.182 22:38:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:20.182 22:38:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71100 00:07:20.182 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71100 ']' 00:07:20.182 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71100 00:07:20.182 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:20.182 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:20.182 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71100 00:07:20.182 killing process with pid 71100 00:07:20.182 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:20.182 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:20.182 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71100' 00:07:20.182 22:38:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71100 00:07:20.182 22:38:34 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71100 00:07:20.441 00:07:20.441 real 0m1.476s 00:07:20.441 user 0m4.115s 00:07:20.441 sys 0m0.335s 00:07:20.441 22:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.441 ************************************ 00:07:20.441 END TEST locking_overlapped_coremask 00:07:20.441 ************************************ 00:07:20.441 22:38:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:20.441 22:38:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:20.441 22:38:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.441 22:38:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.441 22:38:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.441 ************************************ 00:07:20.441 START TEST locking_overlapped_coremask_via_rpc 00:07:20.441 ************************************ 00:07:20.441 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:20.441 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71150 00:07:20.441 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71150 /var/tmp/spdk.sock 00:07:20.441 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71150 ']' 00:07:20.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.441 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.441 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:20.441 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.441 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.441 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.441 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.699 [2024-12-07 22:38:35.223054] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:20.699 [2024-12-07 22:38:35.223145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71150 ] 00:07:20.699 [2024-12-07 22:38:35.353208] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
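Note: check_remaining_locks, traced near the end of the test above, compares the lock files actually present under /var/tmp against a brace expansion of the expected names — for mask 0x7 (cores 0-2) that is spdk_cpu_lock_000 through spdk_cpu_lock_002. The backslash-escaped string in the trace is just xtrace's rendering of the [[ ... == pattern ]] match; the idiom is roughly (sketch):

    locks=(/var/tmp/spdk_cpu_lock_*)                   # what actually exists
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) # what mask 0x7 should leave
    # Compare the two lists as whitespace-joined strings.
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "locks intact"

The next test, locking_overlapped_coremask_via_rpc, starts both targets with --disable-cpumask-locks — hence the "CPU core locks deactivated" notice above — and only turns locking on later through the framework_enable_cpumask_locks RPC.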
00:07:20.699 [2024-12-07 22:38:35.353247] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:20.699 [2024-12-07 22:38:35.387576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.699 [2024-12-07 22:38:35.387689] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.699 [2024-12-07 22:38:35.387694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.699 [2024-12-07 22:38:35.422878] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.957 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.957 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:20.957 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71155 00:07:20.957 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71155 /var/tmp/spdk2.sock 00:07:20.957 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:20.957 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71155 ']' 00:07:20.957 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.957 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.957 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.957 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.957 22:38:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.957 [2024-12-07 22:38:35.614122] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:20.957 [2024-12-07 22:38:35.614235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71155 ] 00:07:21.214 [2024-12-07 22:38:35.762038] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:21.214 [2024-12-07 22:38:35.762119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:21.214 [2024-12-07 22:38:35.841454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.214 [2024-12-07 22:38:35.841576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.214 [2024-12-07 22:38:35.841576] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:21.214 [2024-12-07 22:38:35.919330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.157 [2024-12-07 22:38:36.595027] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71150 has claimed it. 00:07:22.157 request: 00:07:22.157 { 00:07:22.157 "method": "framework_enable_cpumask_locks", 00:07:22.157 "req_id": 1 00:07:22.157 } 00:07:22.157 Got JSON-RPC error response 00:07:22.157 response: 00:07:22.157 { 00:07:22.157 "code": -32603, 00:07:22.157 "message": "Failed to claim CPU core: 2" 00:07:22.157 } 00:07:22.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
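Note: here the conflict is provoked at runtime rather than at startup. Locks were first enabled on target one (pid 71150) with rpc_cmd framework_enable_cpumask_locks; the same RPC sent to target two on /var/tmp/spdk2.sock then fails in claim_cpu_cores because core 2 is already locked, producing the -32603 response above, which NOT rpc_cmd asserts. By hand it would look like this (sketch, paths as in the log):

    # First target, default socket: claims its cores, returns success.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks

    # Second target, overlapping core 2: expected to fail with
    # {"code": -32603, "message": "Failed to claim CPU core: 2"}.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks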
00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71150 /var/tmp/spdk.sock 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71150 ']' 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71155 /var/tmp/spdk2.sock 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71155 ']' 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
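Note: waitforlisten is the readiness gate traced above for both pids. Its visible contract: take a pid plus an RPC socket path (default /var/tmp/spdk.sock), print the "Waiting for process to start up and listen on UNIX domain socket ..." message, and give up after max_retries=100. A simplified stand-in with the same shape — the actual probe logic is not visible in this trace, so the socket test below is an assumption:

    waitforlisten_sketch() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
            [[ -S "$rpc_addr" ]] && return 0         # socket is up (simplified probe)
            sleep 0.1
        done
        return 1
    }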
00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.157 22:38:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.416 22:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.416 22:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:22.416 22:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:22.416 22:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:22.416 22:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:22.416 22:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:22.416 00:07:22.416 real 0m1.970s 00:07:22.416 user 0m1.155s 00:07:22.416 sys 0m0.155s 00:07:22.416 22:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:22.416 22:38:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:22.416 ************************************ 00:07:22.416 END TEST locking_overlapped_coremask_via_rpc 00:07:22.416 ************************************ 00:07:22.674 22:38:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:22.674 22:38:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71150 ]] 00:07:22.674 22:38:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71150 00:07:22.674 22:38:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71150 ']' 00:07:22.674 22:38:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71150 00:07:22.674 22:38:37 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:22.674 22:38:37 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.674 22:38:37 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71150 00:07:22.674 killing process with pid 71150 00:07:22.674 22:38:37 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:22.674 22:38:37 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:22.674 22:38:37 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71150' 00:07:22.674 22:38:37 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71150 00:07:22.674 22:38:37 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71150 00:07:22.933 22:38:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71155 ]] 00:07:22.933 22:38:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71155 00:07:22.933 22:38:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71155 ']' 00:07:22.933 22:38:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71155 00:07:22.933 22:38:37 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:22.933 22:38:37 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:22.933 
22:38:37 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71155 00:07:22.933 killing process with pid 71155 00:07:22.933 22:38:37 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:22.933 22:38:37 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:22.933 22:38:37 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71155' 00:07:22.933 22:38:37 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71155 00:07:22.933 22:38:37 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71155 00:07:23.192 22:38:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.192 22:38:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:23.192 22:38:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71150 ]] 00:07:23.192 22:38:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71150 00:07:23.192 Process with pid 71150 is not found 00:07:23.192 Process with pid 71155 is not found 00:07:23.192 22:38:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71150 ']' 00:07:23.192 22:38:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71150 00:07:23.192 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71150) - No such process 00:07:23.192 22:38:37 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71150 is not found' 00:07:23.192 22:38:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71155 ]] 00:07:23.192 22:38:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71155 00:07:23.192 22:38:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71155 ']' 00:07:23.192 22:38:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71155 00:07:23.192 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71155) - No such process 00:07:23.192 22:38:37 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71155 is not found' 00:07:23.192 22:38:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:23.192 00:07:23.192 real 0m14.526s 00:07:23.192 user 0m26.723s 00:07:23.192 sys 0m4.497s 00:07:23.192 22:38:37 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.192 22:38:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.192 ************************************ 00:07:23.192 END TEST cpu_locks 00:07:23.192 ************************************ 00:07:23.192 00:07:23.192 real 0m41.454s 00:07:23.192 user 1m22.525s 00:07:23.192 sys 0m7.727s 00:07:23.192 ************************************ 00:07:23.192 END TEST event 00:07:23.192 ************************************ 00:07:23.192 22:38:37 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:23.192 22:38:37 event -- common/autotest_common.sh@10 -- # set +x 00:07:23.192 22:38:37 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:23.192 22:38:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:23.192 22:38:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.192 22:38:37 -- common/autotest_common.sh@10 -- # set +x 00:07:23.192 ************************************ 00:07:23.192 START TEST thread 00:07:23.192 ************************************ 00:07:23.192 22:38:37 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:23.192 * Looking for test storage... 
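Note: the teardown traced above is the harness's standard killprocess sequence, repeated for every target in this run: check the pid is set and alive (kill -0), on Linux resolve the command name with ps --no-headers -o comm= — reactor_2 here, matching the lowest core in the 0x1c mask — refuse to proceed if the name is sudo, then kill and wait. Roughly (sketch):

    killprocess_sketch() {
        local pid=$1
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" || return 1                   # must still be running
        if [[ "$(uname)" == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_2
            [[ "$name" == sudo ]] && return 1        # never kill a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                   # wait works because it is our child
    }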
00:07:23.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:23.192 22:38:37 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:23.192 22:38:37 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:23.192 22:38:37 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:23.452 22:38:38 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:23.452 22:38:38 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.452 22:38:38 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.452 22:38:38 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.452 22:38:38 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.452 22:38:38 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.452 22:38:38 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.452 22:38:38 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.452 22:38:38 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.452 22:38:38 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.452 22:38:38 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.452 22:38:38 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.452 22:38:38 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:23.452 22:38:38 thread -- scripts/common.sh@345 -- # : 1 00:07:23.452 22:38:38 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.452 22:38:38 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.452 22:38:38 thread -- scripts/common.sh@365 -- # decimal 1 00:07:23.452 22:38:38 thread -- scripts/common.sh@353 -- # local d=1 00:07:23.452 22:38:38 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.452 22:38:38 thread -- scripts/common.sh@355 -- # echo 1 00:07:23.452 22:38:38 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.452 22:38:38 thread -- scripts/common.sh@366 -- # decimal 2 00:07:23.452 22:38:38 thread -- scripts/common.sh@353 -- # local d=2 00:07:23.452 22:38:38 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.452 22:38:38 thread -- scripts/common.sh@355 -- # echo 2 00:07:23.452 22:38:38 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.452 22:38:38 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.452 22:38:38 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.452 22:38:38 thread -- scripts/common.sh@368 -- # return 0 00:07:23.452 22:38:38 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.452 22:38:38 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:23.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.452 --rc genhtml_branch_coverage=1 00:07:23.452 --rc genhtml_function_coverage=1 00:07:23.452 --rc genhtml_legend=1 00:07:23.452 --rc geninfo_all_blocks=1 00:07:23.452 --rc geninfo_unexecuted_blocks=1 00:07:23.452 00:07:23.452 ' 00:07:23.452 22:38:38 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:23.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.452 --rc genhtml_branch_coverage=1 00:07:23.452 --rc genhtml_function_coverage=1 00:07:23.452 --rc genhtml_legend=1 00:07:23.452 --rc geninfo_all_blocks=1 00:07:23.452 --rc geninfo_unexecuted_blocks=1 00:07:23.452 00:07:23.452 ' 00:07:23.452 22:38:38 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:23.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:23.452 --rc genhtml_branch_coverage=1 00:07:23.452 --rc genhtml_function_coverage=1 00:07:23.452 --rc genhtml_legend=1 00:07:23.452 --rc geninfo_all_blocks=1 00:07:23.452 --rc geninfo_unexecuted_blocks=1 00:07:23.452 00:07:23.452 ' 00:07:23.452 22:38:38 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:23.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.452 --rc genhtml_branch_coverage=1 00:07:23.452 --rc genhtml_function_coverage=1 00:07:23.452 --rc genhtml_legend=1 00:07:23.452 --rc geninfo_all_blocks=1 00:07:23.452 --rc geninfo_unexecuted_blocks=1 00:07:23.452 00:07:23.452 ' 00:07:23.452 22:38:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.452 22:38:38 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:23.452 22:38:38 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:23.452 22:38:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.452 ************************************ 00:07:23.452 START TEST thread_poller_perf 00:07:23.452 ************************************ 00:07:23.452 22:38:38 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:23.452 [2024-12-07 22:38:38.065352] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:23.452 [2024-12-07 22:38:38.065583] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71293 ] 00:07:23.452 [2024-12-07 22:38:38.204474] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.712 [2024-12-07 22:38:38.245882] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.712 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:24.648 [2024-12-07T22:38:39.414Z] ====================================== 00:07:24.648 [2024-12-07T22:38:39.414Z] busy:2212161171 (cyc) 00:07:24.648 [2024-12-07T22:38:39.414Z] total_run_count: 315000 00:07:24.648 [2024-12-07T22:38:39.414Z] tsc_hz: 2200000000 (cyc) 00:07:24.648 [2024-12-07T22:38:39.414Z] ====================================== 00:07:24.648 [2024-12-07T22:38:39.414Z] poller_cost: 7022 (cyc), 3191 (nsec) 00:07:24.648 00:07:24.648 real 0m1.258s 00:07:24.648 user 0m1.115s 00:07:24.648 sys 0m0.037s 00:07:24.648 22:38:39 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.648 22:38:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:24.648 ************************************ 00:07:24.648 END TEST thread_poller_perf 00:07:24.648 ************************************ 00:07:24.648 22:38:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.648 22:38:39 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:24.648 22:38:39 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.648 22:38:39 thread -- common/autotest_common.sh@10 -- # set +x 00:07:24.648 ************************************ 00:07:24.648 START TEST thread_poller_perf 00:07:24.648 ************************************ 00:07:24.648 22:38:39 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:24.648 [2024-12-07 22:38:39.379344] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:24.648 [2024-12-07 22:38:39.379434] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71323 ] 00:07:24.907 [2024-12-07 22:38:39.516407] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.907 Running 1000 pollers for 1 seconds with 0 microseconds period. 
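Note: the poller_cost rows in these tables follow from the other three: cycles per poll is busy cycles divided by total_run_count, and the nanosecond figure divides that by the TSC rate (2.2 cycles/ns here). For the 1 µs timed-poller run above: 2212161171 / 315000 ≈ 7022 cyc ≈ 3191 ns per poll. The same arithmetic on the busy-poll (-l 0) run whose table follows gives 2201869118 / 4372000 ≈ 503 cyc ≈ 228 ns — about 14x cheaper per call. Recomputed in shell arithmetic:

    echo $(( 2212161171 / 315000 ))           # 7022 cycles per poll (run 1)
    echo $(( 2212161171 / 315000 * 10 / 22 )) # 3191 ns at tsc_hz = 2.2 GHz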
00:07:24.907 [2024-12-07 22:38:39.559086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.285 [2024-12-07T22:38:41.051Z] ====================================== 00:07:26.285 [2024-12-07T22:38:41.051Z] busy:2201869118 (cyc) 00:07:26.285 [2024-12-07T22:38:41.051Z] total_run_count: 4372000 00:07:26.285 [2024-12-07T22:38:41.051Z] tsc_hz: 2200000000 (cyc) 00:07:26.285 [2024-12-07T22:38:41.051Z] ====================================== 00:07:26.285 [2024-12-07T22:38:41.051Z] poller_cost: 503 (cyc), 228 (nsec) 00:07:26.285 ************************************ 00:07:26.285 END TEST thread_poller_perf 00:07:26.285 ************************************ 00:07:26.285 00:07:26.285 real 0m1.255s 00:07:26.285 user 0m1.103s 00:07:26.285 sys 0m0.046s 00:07:26.285 22:38:40 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.285 22:38:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:26.285 22:38:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:26.285 ************************************ 00:07:26.285 END TEST thread 00:07:26.285 ************************************ 00:07:26.285 00:07:26.285 real 0m2.799s 00:07:26.285 user 0m2.359s 00:07:26.285 sys 0m0.219s 00:07:26.285 22:38:40 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:26.285 22:38:40 thread -- common/autotest_common.sh@10 -- # set +x 00:07:26.285 22:38:40 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:26.285 22:38:40 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:26.285 22:38:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:26.285 22:38:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:26.285 22:38:40 -- common/autotest_common.sh@10 -- # set +x 00:07:26.285 ************************************ 00:07:26.285 START TEST app_cmdline 00:07:26.285 ************************************ 00:07:26.285 22:38:40 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:26.285 * Looking for test storage... 
00:07:26.285 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:26.285 22:38:40 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:26.285 22:38:40 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:26.285 22:38:40 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:26.285 22:38:40 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.285 22:38:40 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:26.285 22:38:40 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.285 22:38:40 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:26.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.285 --rc genhtml_branch_coverage=1 00:07:26.285 --rc genhtml_function_coverage=1 00:07:26.285 --rc genhtml_legend=1 00:07:26.285 --rc geninfo_all_blocks=1 00:07:26.285 --rc geninfo_unexecuted_blocks=1 00:07:26.285 00:07:26.285 ' 00:07:26.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:26.285 22:38:40 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:26.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.285 --rc genhtml_branch_coverage=1 00:07:26.285 --rc genhtml_function_coverage=1 00:07:26.285 --rc genhtml_legend=1 00:07:26.285 --rc geninfo_all_blocks=1 00:07:26.285 --rc geninfo_unexecuted_blocks=1 00:07:26.285 00:07:26.285 ' 00:07:26.285 22:38:40 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:26.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.285 --rc genhtml_branch_coverage=1 00:07:26.285 --rc genhtml_function_coverage=1 00:07:26.285 --rc genhtml_legend=1 00:07:26.285 --rc geninfo_all_blocks=1 00:07:26.285 --rc geninfo_unexecuted_blocks=1 00:07:26.285 00:07:26.285 ' 00:07:26.285 22:38:40 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:26.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.285 --rc genhtml_branch_coverage=1 00:07:26.285 --rc genhtml_function_coverage=1 00:07:26.285 --rc genhtml_legend=1 00:07:26.285 --rc geninfo_all_blocks=1 00:07:26.285 --rc geninfo_unexecuted_blocks=1 00:07:26.285 00:07:26.285 ' 00:07:26.285 22:38:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:26.285 22:38:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71406 00:07:26.285 22:38:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71406 00:07:26.285 22:38:40 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71406 ']' 00:07:26.285 22:38:40 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:26.285 22:38:40 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.285 22:38:40 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:26.285 22:38:40 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.285 22:38:40 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:26.285 22:38:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:26.285 [2024-12-07 22:38:40.966319] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:26.285 [2024-12-07 22:38:40.966621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71406 ] 00:07:26.544 [2024-12-07 22:38:41.103840] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.545 [2024-12-07 22:38:41.146453] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.545 [2024-12-07 22:38:41.183372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:26.545 22:38:41 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:26.545 22:38:41 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:26.545 22:38:41 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:27.112 { 00:07:27.112 "version": "SPDK v24.09.1-pre git sha1 b18e1bd62", 00:07:27.112 "fields": { 00:07:27.112 "major": 24, 00:07:27.112 "minor": 9, 00:07:27.112 "patch": 1, 00:07:27.112 "suffix": "-pre", 00:07:27.112 "commit": "b18e1bd62" 00:07:27.112 } 00:07:27.112 } 00:07:27.112 22:38:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:27.112 22:38:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:27.112 22:38:41 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:27.112 22:38:41 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:27.112 22:38:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:27.112 22:38:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:27.112 22:38:41 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:27.112 22:38:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.112 22:38:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:27.112 22:38:41 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:27.112 22:38:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:27.112 22:38:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:27.112 22:38:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:27.113 22:38:41 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:27.113 22:38:41 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:27.113 22:38:41 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.113 22:38:41 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.113 22:38:41 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.113 22:38:41 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.113 22:38:41 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.113 22:38:41 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.113 22:38:41 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:27.113 22:38:41 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:27.113 22:38:41 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:27.372 request: 00:07:27.372 { 00:07:27.372 "method": "env_dpdk_get_mem_stats", 00:07:27.372 "req_id": 1 00:07:27.372 } 00:07:27.372 Got JSON-RPC error response 00:07:27.372 response: 00:07:27.372 { 00:07:27.372 "code": -32601, 00:07:27.372 "message": "Method not found" 00:07:27.372 } 00:07:27.373 22:38:41 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:27.373 22:38:41 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:27.373 22:38:41 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:27.373 22:38:41 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:27.373 22:38:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71406 00:07:27.373 22:38:41 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71406 ']' 00:07:27.373 22:38:41 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71406 00:07:27.373 22:38:41 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:27.373 22:38:41 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.373 22:38:41 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71406 00:07:27.373 killing process with pid 71406 00:07:27.373 22:38:41 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.373 22:38:41 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.373 22:38:41 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71406' 00:07:27.373 22:38:41 app_cmdline -- common/autotest_common.sh@969 -- # kill 71406 00:07:27.373 22:38:41 app_cmdline -- common/autotest_common.sh@974 -- # wait 71406 00:07:27.632 ************************************ 00:07:27.632 END TEST app_cmdline 00:07:27.632 ************************************ 00:07:27.632 00:07:27.632 real 0m1.502s 00:07:27.632 user 0m1.984s 00:07:27.632 sys 0m0.386s 00:07:27.632 22:38:42 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.632 22:38:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.632 22:38:42 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:27.632 22:38:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.632 22:38:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.632 22:38:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.632 ************************************ 00:07:27.632 START TEST version 00:07:27.632 ************************************ 00:07:27.632 22:38:42 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:27.632 * Looking for test storage... 
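Note: the app_cmdline test above exercises spdk_tgt's RPC allowlist. Launched with --rpcs-allowed spdk_get_version,rpc_get_methods, the target answers exactly those two methods (the rpc_get_methods | jq -r '.[]' | sort check confirms the list has length 2), and anything else — env_dpdk_get_mem_stats here — is rejected with -32601 "Method not found", which the NOT wrapper asserts before teardown. The same probe by hand (sketch, paths as in the log):

    # On the allowlist: returns the version object shown in the log.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version

    # Not on the allowlist: fails with
    # {"code": -32601, "message": "Method not found"}.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats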
00:07:27.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:27.632 22:38:42 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:27.632 22:38:42 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:27.632 22:38:42 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:27.892 22:38:42 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:27.892 22:38:42 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.892 22:38:42 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.892 22:38:42 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.892 22:38:42 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.892 22:38:42 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.892 22:38:42 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.892 22:38:42 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.892 22:38:42 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.892 22:38:42 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.892 22:38:42 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.892 22:38:42 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.892 22:38:42 version -- scripts/common.sh@344 -- # case "$op" in 00:07:27.892 22:38:42 version -- scripts/common.sh@345 -- # : 1 00:07:27.892 22:38:42 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.892 22:38:42 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:27.892 22:38:42 version -- scripts/common.sh@365 -- # decimal 1 00:07:27.892 22:38:42 version -- scripts/common.sh@353 -- # local d=1 00:07:27.892 22:38:42 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.892 22:38:42 version -- scripts/common.sh@355 -- # echo 1 00:07:27.892 22:38:42 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.892 22:38:42 version -- scripts/common.sh@366 -- # decimal 2 00:07:27.892 22:38:42 version -- scripts/common.sh@353 -- # local d=2 00:07:27.892 22:38:42 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.892 22:38:42 version -- scripts/common.sh@355 -- # echo 2 00:07:27.892 22:38:42 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.892 22:38:42 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.892 22:38:42 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.892 22:38:42 version -- scripts/common.sh@368 -- # return 0 00:07:27.892 22:38:42 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.892 22:38:42 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:27.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.892 --rc genhtml_branch_coverage=1 00:07:27.892 --rc genhtml_function_coverage=1 00:07:27.892 --rc genhtml_legend=1 00:07:27.892 --rc geninfo_all_blocks=1 00:07:27.892 --rc geninfo_unexecuted_blocks=1 00:07:27.892 00:07:27.892 ' 00:07:27.892 22:38:42 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:27.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.892 --rc genhtml_branch_coverage=1 00:07:27.892 --rc genhtml_function_coverage=1 00:07:27.892 --rc genhtml_legend=1 00:07:27.892 --rc geninfo_all_blocks=1 00:07:27.892 --rc geninfo_unexecuted_blocks=1 00:07:27.892 00:07:27.892 ' 00:07:27.892 22:38:42 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:27.892 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:27.892 --rc genhtml_branch_coverage=1 00:07:27.892 --rc genhtml_function_coverage=1 00:07:27.892 --rc genhtml_legend=1 00:07:27.892 --rc geninfo_all_blocks=1 00:07:27.892 --rc geninfo_unexecuted_blocks=1 00:07:27.892 00:07:27.892 ' 00:07:27.892 22:38:42 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:27.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.892 --rc genhtml_branch_coverage=1 00:07:27.892 --rc genhtml_function_coverage=1 00:07:27.892 --rc genhtml_legend=1 00:07:27.892 --rc geninfo_all_blocks=1 00:07:27.892 --rc geninfo_unexecuted_blocks=1 00:07:27.892 00:07:27.892 ' 00:07:27.892 22:38:42 version -- app/version.sh@17 -- # get_header_version major 00:07:27.892 22:38:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.892 22:38:42 version -- app/version.sh@14 -- # cut -f2 00:07:27.892 22:38:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.892 22:38:42 version -- app/version.sh@17 -- # major=24 00:07:27.892 22:38:42 version -- app/version.sh@18 -- # get_header_version minor 00:07:27.892 22:38:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.892 22:38:42 version -- app/version.sh@14 -- # cut -f2 00:07:27.892 22:38:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.892 22:38:42 version -- app/version.sh@18 -- # minor=9 00:07:27.892 22:38:42 version -- app/version.sh@19 -- # get_header_version patch 00:07:27.892 22:38:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.892 22:38:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.892 22:38:42 version -- app/version.sh@14 -- # cut -f2 00:07:27.892 22:38:42 version -- app/version.sh@19 -- # patch=1 00:07:27.892 22:38:42 version -- app/version.sh@20 -- # get_header_version suffix 00:07:27.892 22:38:42 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:27.892 22:38:42 version -- app/version.sh@14 -- # cut -f2 00:07:27.892 22:38:42 version -- app/version.sh@14 -- # tr -d '"' 00:07:27.892 22:38:42 version -- app/version.sh@20 -- # suffix=-pre 00:07:27.892 22:38:42 version -- app/version.sh@22 -- # version=24.9 00:07:27.892 22:38:42 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:27.892 22:38:42 version -- app/version.sh@25 -- # version=24.9.1 00:07:27.892 22:38:42 version -- app/version.sh@28 -- # version=24.9.1rc0 00:07:27.892 22:38:42 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:27.892 22:38:42 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:27.892 22:38:42 version -- app/version.sh@30 -- # py_version=24.9.1rc0 00:07:27.892 22:38:42 version -- app/version.sh@31 -- # [[ 24.9.1rc0 == \2\4\.\9\.\1\r\c\0 ]] 00:07:27.892 00:07:27.892 real 0m0.235s 00:07:27.892 user 0m0.157s 00:07:27.892 sys 0m0.115s 00:07:27.892 ************************************ 00:07:27.892 END TEST version 00:07:27.892 ************************************ 00:07:27.892 22:38:42 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.892 22:38:42 
version -- common/autotest_common.sh@10 -- # set +x 00:07:27.892 22:38:42 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:27.892 22:38:42 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:27.892 22:38:42 -- spdk/autotest.sh@194 -- # uname -s 00:07:27.892 22:38:42 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:27.892 22:38:42 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:27.892 22:38:42 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:27.892 22:38:42 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:27.892 22:38:42 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:27.892 22:38:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.892 22:38:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.893 22:38:42 -- common/autotest_common.sh@10 -- # set +x 00:07:27.893 ************************************ 00:07:27.893 START TEST spdk_dd 00:07:27.893 ************************************ 00:07:27.893 22:38:42 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:27.893 * Looking for test storage... 00:07:27.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:27.893 22:38:42 spdk_dd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:27.893 22:38:42 spdk_dd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:27.893 22:38:42 spdk_dd -- common/autotest_common.sh@1681 -- # lcov --version 00:07:28.152 22:38:42 spdk_dd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.152 22:38:42 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:28.152 22:38:42 spdk_dd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.152 22:38:42 spdk_dd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:28.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.152 --rc genhtml_branch_coverage=1 00:07:28.152 --rc genhtml_function_coverage=1 00:07:28.152 --rc genhtml_legend=1 00:07:28.152 --rc geninfo_all_blocks=1 00:07:28.152 --rc geninfo_unexecuted_blocks=1 00:07:28.152 00:07:28.152 ' 00:07:28.152 22:38:42 spdk_dd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:28.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.152 --rc genhtml_branch_coverage=1 00:07:28.152 --rc genhtml_function_coverage=1 00:07:28.152 --rc genhtml_legend=1 00:07:28.152 --rc geninfo_all_blocks=1 00:07:28.152 --rc geninfo_unexecuted_blocks=1 00:07:28.152 00:07:28.153 ' 00:07:28.153 22:38:42 spdk_dd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:28.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.153 --rc genhtml_branch_coverage=1 00:07:28.153 --rc genhtml_function_coverage=1 00:07:28.153 --rc genhtml_legend=1 00:07:28.153 --rc geninfo_all_blocks=1 00:07:28.153 --rc geninfo_unexecuted_blocks=1 00:07:28.153 00:07:28.153 ' 00:07:28.153 22:38:42 spdk_dd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:28.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.153 --rc genhtml_branch_coverage=1 00:07:28.153 --rc genhtml_function_coverage=1 00:07:28.153 --rc genhtml_legend=1 00:07:28.153 --rc geninfo_all_blocks=1 00:07:28.153 --rc geninfo_unexecuted_blocks=1 00:07:28.153 00:07:28.153 ' 00:07:28.153 22:38:42 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:28.153 22:38:42 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:28.153 22:38:42 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.153 22:38:42 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.153 22:38:42 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.153 22:38:42 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.153 22:38:42 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.153 22:38:42 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.153 22:38:42 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:28.153 22:38:42 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.153 22:38:42 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:28.412 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:28.412 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:28.412 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:28.412 22:38:43 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:28.412 22:38:43 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:28.412 22:38:43 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:28.412 22:38:43 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:28.413 22:38:43 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:28.413 22:38:43 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:28.413 22:38:43 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:28.413 22:38:43 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:28.413 22:38:43 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:28.413 22:38:43 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:28.413 22:38:43 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:28.413 22:38:43 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:28.413 22:38:43 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:28.413 22:38:43 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:28.413 22:38:43 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:28.673 22:38:43 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 
00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.673 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:28.674 * spdk_dd linked to liburing 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:28.674 22:38:43 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:28.674 22:38:43 spdk_dd -- 
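Earlier in this trace, dd.sh resolved the test controllers through scripts/common.sh nvme_in_userspace: iter_pci_class_code turns class 01, subclass 08, progif 02 into an lspci filter, and pci_can_use then drops anything excluded by the (empty here) allow/block lists before the /sys/bus/pci/drivers/nvme/<bdf> checks collect both devices. A condensed sketch of just the lspci parse, with the flags and awk program copied from the trace:

    # List BDFs of NVMe-class PCI functions (class/subclass 0108, progif 02),
    # as in scripts/common.sh above; allow/block-list filtering omitted.
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
    # This run resolves to 0000:00:10.0 and 0000:00:11.0.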
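check_liburing then answers whether the spdk_dd binary links liburing by scanning the NEEDED entries of its dynamic section, which is the loop printed above. The probe, reproduced with the binary path from this log:

    # check_liburing as traced in dd/common.sh: flag liburing_in_use when a
    # NEEDED entry of the ELF dynamic section matches liburing.so.*.
    liburing_in_use=0
    while read -r _ lib _; do
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
    (( liburing_in_use )) && printf '* spdk_dd linked to liburing\n'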
common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:28.674 22:38:43 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:07:28.675 22:38:43 spdk_dd -- 
common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@84 -- # 
CONFIG_PGO_DIR= 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:07:28.675 22:38:43 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:07:28.675 22:38:43 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:28.675 22:38:43 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:28.675 22:38:43 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:28.675 22:38:43 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:28.675 22:38:43 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:28.675 22:38:43 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:28.675 22:38:43 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:28.675 22:38:43 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.675 22:38:43 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:28.675 ************************************ 00:07:28.675 START TEST spdk_dd_basic_rw 00:07:28.675 ************************************ 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:28.675 * Looking for test storage... 00:07:28.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lcov --version 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:28.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.675 --rc genhtml_branch_coverage=1 00:07:28.675 --rc genhtml_function_coverage=1 00:07:28.675 --rc genhtml_legend=1 00:07:28.675 --rc geninfo_all_blocks=1 00:07:28.675 --rc geninfo_unexecuted_blocks=1 00:07:28.675 00:07:28.675 ' 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:28.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.675 --rc genhtml_branch_coverage=1 00:07:28.675 --rc genhtml_function_coverage=1 00:07:28.675 --rc genhtml_legend=1 00:07:28.675 --rc geninfo_all_blocks=1 00:07:28.675 --rc geninfo_unexecuted_blocks=1 00:07:28.675 00:07:28.675 ' 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:28.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.675 --rc genhtml_branch_coverage=1 00:07:28.675 --rc genhtml_function_coverage=1 00:07:28.675 --rc genhtml_legend=1 00:07:28.675 --rc geninfo_all_blocks=1 00:07:28.675 --rc geninfo_unexecuted_blocks=1 00:07:28.675 00:07:28.675 ' 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:28.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.675 --rc genhtml_branch_coverage=1 00:07:28.675 --rc genhtml_function_coverage=1 00:07:28.675 --rc genhtml_legend=1 00:07:28.675 --rc geninfo_all_blocks=1 00:07:28.675 --rc geninfo_unexecuted_blocks=1 00:07:28.675 00:07:28.675 ' 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.675 22:38:43 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.676 22:38:43 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.676 22:38:43 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.676 22:38:43 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.676 22:38:43 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:28.676 22:38:43 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.676 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:28.937 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:28.937 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:28.937 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:28.937 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:28.937 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:28.937 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:28.937 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:28.937 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:28.937 22:38:43 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:28.937 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:28.937 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:28.937 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:28.938 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:28.938 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported
Vendor Specific: Not Supported
Reset Timeout: 7500 ms
Doorbell Stride: 4 bytes
NVM Subsystem Reset: Not Supported
Command Sets Supported
NVM Command Set: Supported
Boot Partition: Not Supported
Memory Page Size Minimum: 4096 bytes
Memory Page Size Maximum: 65536 bytes
Persistent Memory Region: Not Supported
Optional Asynchronous Events Supported
Namespace Attribute Notices: Supported
Firmware Activation Notices: Not Supported
ANA Change Notices: Not Supported
PLE Aggregate Log Change Notices: Not Supported
LBA Status Info Alert Notices: Not Supported
EGE Aggregate Log Change Notices: Not Supported
Normal NVM Subsystem Shutdown event: Not Supported
Zone Descriptor Change Notices: Not Supported
Discovery Log Change Notices: Not Supported
Controller Attributes
128-bit Host Identifier: Not Supported
Non-Operational Permissive Mode: Not Supported
NVM Sets: Not Supported
Read Recovery Levels: Not Supported
Endurance Groups: Not Supported
Predictable Latency Mode: Not Supported
Traffic Based Keep Alive: Not Supported
Namespace Granularity: Not Supported
SQ Associations: Not Supported
UUID List: Not Supported
Multi-Domain Subsystem: Not Supported
Fixed Capacity Management: Not Supported
Variable Capacity Management: Not Supported
Delete Endurance Group: Not Supported
Delete NVM Set: Not Supported
Extended LBA Formats Supported: Supported
Flexible Data Placement Supported: Not Supported
Controller Memory Buffer Support
================================
Supported: No
Persistent Memory Region Support
================================
Supported: No
Admin Command Set Attributes
============================
Security Send/Receive: Not Supported
Format NVM: Supported
Firmware Activate/Download: Not Supported
Namespace Management: Supported
Device Self-Test: Not Supported
Directives: Supported
NVMe-MI: Not Supported
Virtualization Management: Not Supported
Doorbell Buffer Config: Supported
Get LBA Status Capability: Not Supported
Command & Feature Lockdown Capability: Not Supported
Abort Command Limit: 4
Async Event Request Limit: 4
Number of Firmware Slots: N/A
Firmware Slot 1 Read-Only: N/A
Firmware Activation Without Reset: N/A
Multiple Update Detection Support: N/A
Firmware Update Granularity: No Information Provided
Per-Namespace SMART Log: Yes
Asymmetric Namespace Access Log Page: Not Supported
Subsystem NQN: nqn.2019-08.org.qemu:12340
Command Effects Log Page: Supported
Get Log Page Extended Data: Supported
Telemetry Log Pages: Not Supported
Persistent Event Log Pages: Not Supported
Supported Log Pages Log Page: May Support
Commands Supported & Effects Log Page: Not Supported
Feature Identifiers & Effects Log Page: May Support
NVMe-MI Commands & Effects Log Page: May Support
Data Area 4 for Telemetry Log: Not Supported
Error Log Page Entries Supported: 1
Keep Alive: Not Supported
NVM Command Set Attributes
==========================
Submission Queue Entry Size
Max: 64
Min: 64
Completion Queue Entry Size
Max: 16
Min: 16
Number of Namespaces: 256
Compare Command: Supported
Write Uncorrectable Command: Not Supported
Dataset Management Command: Supported
Write Zeroes Command: Supported
Set Features Save Field: Supported
Reservations: Not Supported
Timestamp: Supported
Copy: Supported
Volatile Write Cache: Present
Atomic Write Unit (Normal): 1
Atomic Write Unit (PFail): 1
Atomic Compare & Write Unit: 1
Fused Compare & Write: Not Supported
Scatter-Gather List
SGL Command Set: Supported
SGL Keyed: Not Supported
SGL Bit Bucket Descriptor: Not Supported
SGL Metadata Pointer: Not Supported
Oversized SGL: Not Supported
SGL Metadata Address: Not Supported
SGL Offset: Not Supported
Transport SGL Data Block: Not Supported
Replay Protected Memory Block: Not Supported
Firmware Slot Information
=========================
Active slot: 1
Slot 1 Firmware Revision: 1.0
Commands Supported and Effects
==============================
Admin Commands
--------------
Delete I/O Submission Queue (00h): Supported
Create I/O Submission Queue (01h): Supported
Get Log Page (02h): Supported
Delete I/O Completion Queue (04h): Supported
Create I/O Completion Queue (05h): Supported
Identify (06h): Supported
Abort (08h): Supported
Set Features (09h): Supported
Get Features (0Ah): Supported
Asynchronous Event Request (0Ch): Supported
Namespace Attachment (15h): Supported NS-Inventory-Change
Directive Send (19h): Supported
Directive Receive (1Ah): Supported
Virtualization Management (1Ch): Supported
Doorbell Buffer Config (7Ch): Supported
Format NVM (80h): Supported LBA-Change
I/O Commands
------------
Flush (00h): Supported LBA-Change
Write (01h): Supported LBA-Change
Read (02h): Supported
Compare (05h): Supported
Write Zeroes (08h): Supported LBA-Change
Dataset Management (09h): Supported LBA-Change
Unknown (0Ch): Supported
Unknown (12h): Supported
Copy (19h): Supported LBA-Change
Unknown (1Dh): Supported LBA-Change
Error Log
=========
Arbitration
===========
Arbitration Burst: no limit
Power Management
================
Number of Power States: 1
Current Power State: Power State #0
Power State #0:
Max Power: 25.00 W
Non-Operational State: Operational
Entry Latency: 16 microseconds
Exit Latency: 4 microseconds
Relative Read Throughput: 0
Relative Read Latency: 0
Relative Write Throughput: 0
Relative Write Latency: 0
Idle Power: Not Reported
Active Power: Not Reported
Non-Operational Permissive Mode: Not Supported
Health Information
==================
Critical Warnings:
Available Spare Space: OK
Temperature: OK
Device Reliability: OK
Read Only: No
Volatile Memory Backup: OK
Current Temperature: 323 Kelvin (50 Celsius)
Temperature Threshold: 343 Kelvin (70 Celsius)
Available Spare: 0%
Available Spare Threshold: 0%
Life Percentage Used: 0%
Data Units Read: 22
Data Units Written: 3
Host Read Commands: 496
Host Write Commands: 2
Controller Busy Time: 0 minutes
Power Cycles: 0
Power On Hours: 0 hours
Unsafe Shutdowns: 0
Unrecoverable Media Errors: 0
Lifetime Error Log Entries: 0
Warning Temperature Time: 0 minutes
Critical Temperature Time: 0 minutes
Number of Queues
================
Number of I/O Submission Queues: 64
Number of I/O Completion Queues: 64
ZNS Specific Controller Data
============================
Zone Append Size Limit: 0
Active Namespaces
=================
Namespace ID:1
Error Recovery Timeout: Unlimited
Command Set Identifier: NVM (00h)
Deallocate: Supported
Deallocated/Unwritten Error: Supported
Deallocated Read Value: All 0x00
Deallocate in Write Zeroes: Not Supported
Deallocated Guard Field: 0xFFFF
Flush: Supported
Reservation: Not Supported
Namespace Sharing Capabilities: Private
Size (in LBAs): 1310720 (5GiB)
Capacity (in LBAs): 1310720 (5GiB)
Utilization (in LBAs): 1310720 (5GiB)
Thin Provisioning: Not Supported
Per-NS Atomic Units: No
Maximum Single Source Range Length: 128
Maximum Copy Length: 128
Maximum Source Range Count: 128
NGUID/EUI64 Never Reused: No
Namespace Write Protected: No
Number of LBA Formats: 8
Current LBA Format: LBA Format #04
LBA Format #00: Data Size: 512 Metadata Size: 0
LBA Format #01: Data Size: 512 Metadata Size: 8
LBA Format #02: Data Size: 512 Metadata Size: 16
LBA Format #03: Data Size: 512 Metadata Size: 64
LBA Format #04: Data Size: 4096 Metadata Size: 0
LBA Format #05: Data Size: 4096 Metadata Size: 8
LBA Format #06: Data Size: 4096 Metadata Size: 16
LBA Format #07: Data Size: 4096 Metadata Size: 64
NVM Specific Namespace Data
===========================
Logical Block Storage Tag Mask: 0
Protection Information Capabilities:
16b Guard Protection Information Storage Tag Support: No
16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
Storage Tag Check Read Support: No
Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
=~ LBA Format #04: Data Size: *([0-9]+) ]]
00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096
00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096
00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096
00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # :
00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']'
00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf
00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable
00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:07:28.939 ************************************
00:07:28.939 START TEST dd_bs_lt_native_bs
00:07:28.939 ************************************
00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0
00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:28.939 22:38:43 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:28.939 { 00:07:28.939 "subsystems": [ 00:07:28.939 { 00:07:28.939 "subsystem": "bdev", 00:07:28.939 "config": [ 00:07:28.939 { 00:07:28.939 "params": { 00:07:28.939 "trtype": "pcie", 00:07:28.939 "traddr": "0000:00:10.0", 00:07:28.939 "name": "Nvme0" 00:07:28.939 }, 00:07:28.939 "method": "bdev_nvme_attach_controller" 00:07:28.939 }, 00:07:28.939 { 00:07:28.939 "method": "bdev_wait_for_examine" 00:07:28.939 } 00:07:28.939 ] 00:07:28.939 } 00:07:28.939 ] 00:07:28.939 } 00:07:28.939 [2024-12-07 22:38:43.686758] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:28.939 [2024-12-07 22:38:43.686858] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71744 ] 00:07:29.199 [2024-12-07 22:38:43.822759] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.199 [2024-12-07 22:38:43.855602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.199 [2024-12-07 22:38:43.883412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.458 [2024-12-07 22:38:43.970251] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:29.458 [2024-12-07 22:38:43.970338] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:29.458 [2024-12-07 22:38:44.036009] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:29.458 00:07:29.458 real 0m0.464s 00:07:29.458 user 0m0.300s 00:07:29.458 sys 0m0.109s 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.458 
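The negative test that just finished is worth unpacking: dd/common.sh derives the device's native block size by matching the current LBA format's data size in the identify dump above (format #04, hence 4096), and run_test wraps spdk_dd in NOT, so the test only passes if spdk_dd rejects the undersized --bs. A minimal sketch of those two steps, with a hypothetical $id_output standing in for the harness's captured identify text and paths shortened:

    # extract the native block size from the identify output (4096 here)
    re='LBA Format #04: Data Size: *([0-9]+)'
    [[ $id_output =~ $re ]] && native_bs=${BASH_REMATCH[1]}
    # a --bs below native_bs must make spdk_dd exit non-zero, as the
    # "--bs value cannot be less than ..." error above confirms
    if spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61; then
        echo "FAIL: bs < native_bs was accepted" >&2
        exit 1
    fi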
************************************ 00:07:29.458 END TEST dd_bs_lt_native_bs 00:07:29.458 ************************************ 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:29.458 ************************************ 00:07:29.458 START TEST dd_rw 00:07:29.458 ************************************ 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:29.458 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:29.459 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:29.459 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:29.459 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:30.027 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:30.027 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:30.027 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:30.027 22:38:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:30.286 [2024-12-07 22:38:44.793624] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
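The dd_rw prologue traced just after the START TEST banner builds the test matrix; restated compactly with the same variable names the xtrace shows, and the values observed in this run:

    native_bs=4096        # from the identify step above
    qds=(1 64)            # queue depths to sweep
    bss=()
    for bs in {0..2}; do
        bss+=($((native_bs << bs)))    # yields 4096, 8192, 16384
    done
    # each (bs, qd) pair then gets a write pass, a read-back pass,
    # a diff, and a cleanup, as the runs that follow show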
00:07:30.286 [2024-12-07 22:38:44.794094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71775 ] 00:07:30.286 { 00:07:30.286 "subsystems": [ 00:07:30.286 { 00:07:30.286 "subsystem": "bdev", 00:07:30.286 "config": [ 00:07:30.286 { 00:07:30.286 "params": { 00:07:30.286 "trtype": "pcie", 00:07:30.286 "traddr": "0000:00:10.0", 00:07:30.286 "name": "Nvme0" 00:07:30.286 }, 00:07:30.286 "method": "bdev_nvme_attach_controller" 00:07:30.286 }, 00:07:30.286 { 00:07:30.286 "method": "bdev_wait_for_examine" 00:07:30.286 } 00:07:30.286 ] 00:07:30.286 } 00:07:30.286 ] 00:07:30.286 } 00:07:30.286 [2024-12-07 22:38:44.939669] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.286 [2024-12-07 22:38:44.973713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.286 [2024-12-07 22:38:45.001028] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.544  [2024-12-07T22:38:45.310Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:30.544 00:07:30.544 22:38:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:30.544 22:38:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:30.544 22:38:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:30.544 22:38:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:30.544 [2024-12-07 22:38:45.272554] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
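Every spdk_dd invocation in these tests receives the same generated bdev configuration on a /dev/fd descriptor; the JSON interleaved with the timestamps above, reproduced cleanly, is:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "trtype": "pcie",
                "traddr": "0000:00:10.0",
                "name": "Nvme0"
              },
              "method": "bdev_nvme_attach_controller"
            },
            {
              "method": "bdev_wait_for_examine"
            }
          ]
        }
      ]
    }

It attaches the emulated controller at PCI address 0000:00:10.0 as Nvme0 (hence the Nvme0n1 namespace bdev) and waits for bdev examination to finish before any I/O is issued.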
00:07:30.544 [2024-12-07 22:38:45.273324] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71789 ] 00:07:30.544 { 00:07:30.544 "subsystems": [ 00:07:30.544 { 00:07:30.544 "subsystem": "bdev", 00:07:30.544 "config": [ 00:07:30.544 { 00:07:30.544 "params": { 00:07:30.544 "trtype": "pcie", 00:07:30.544 "traddr": "0000:00:10.0", 00:07:30.544 "name": "Nvme0" 00:07:30.544 }, 00:07:30.544 "method": "bdev_nvme_attach_controller" 00:07:30.544 }, 00:07:30.544 { 00:07:30.544 "method": "bdev_wait_for_examine" 00:07:30.544 } 00:07:30.544 ] 00:07:30.544 } 00:07:30.544 ] 00:07:30.544 } 00:07:30.802 [2024-12-07 22:38:45.408142] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.802 [2024-12-07 22:38:45.441965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.802 [2024-12-07 22:38:45.469551] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.802  [2024-12-07T22:38:45.826Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:31.060 00:07:31.060 22:38:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:31.060 22:38:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:31.060 22:38:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:31.060 22:38:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:31.060 22:38:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:31.060 22:38:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:31.060 22:38:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:31.060 22:38:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:31.060 22:38:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:31.060 22:38:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:31.060 22:38:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:31.060 { 00:07:31.060 "subsystems": [ 00:07:31.060 { 00:07:31.060 "subsystem": "bdev", 00:07:31.060 "config": [ 00:07:31.060 { 00:07:31.060 "params": { 00:07:31.060 "trtype": "pcie", 00:07:31.060 "traddr": "0000:00:10.0", 00:07:31.060 "name": "Nvme0" 00:07:31.060 }, 00:07:31.060 "method": "bdev_nvme_attach_controller" 00:07:31.060 }, 00:07:31.060 { 00:07:31.060 "method": "bdev_wait_for_examine" 00:07:31.060 } 00:07:31.060 ] 00:07:31.060 } 00:07:31.060 ] 00:07:31.060 } 00:07:31.060 [2024-12-07 22:38:45.762150] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
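That completes the first bs=4096, qd=1 round trip. Each cycle in dd_rw follows the same four steps; a sketch with shortened paths, where gen_conf emits the JSON shown earlier and spdk_dd is the binary traced above:

    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=$bs --qd=$qd --json <(gen_conf)                 # write the pattern
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=$bs --qd=$qd --count=$count --json <(gen_conf)  # read it back
    diff -q dd.dump0 dd.dump1                                                               # byte-for-byte verify
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)           # clear_nvme: zero the first MiB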
00:07:31.060 [2024-12-07 22:38:45.762430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71804 ] 00:07:31.319 [2024-12-07 22:38:45.898946] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.319 [2024-12-07 22:38:45.930551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.319 [2024-12-07 22:38:45.960091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:31.319  [2024-12-07T22:38:46.342Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:31.576 00:07:31.576 22:38:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:31.576 22:38:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:31.576 22:38:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:31.576 22:38:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:31.576 22:38:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:31.576 22:38:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:31.576 22:38:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.142 22:38:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:32.142 22:38:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:32.142 22:38:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:32.142 22:38:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.142 [2024-12-07 22:38:46.766372] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
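The pass now starting repeats the same 60 kB copy with --qd=64, spdk_dd's queue-depth knob, which caps how many I/O requests are kept in flight at once. In this single, VM-hosted (and therefore noisy) run the deeper queue roughly doubles the reported write rate: the progress line below averages 58 MBps against 29 MBps for the qd=1 write above.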
00:07:32.142 [2024-12-07 22:38:46.766669] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71823 ] 00:07:32.142 { 00:07:32.142 "subsystems": [ 00:07:32.142 { 00:07:32.142 "subsystem": "bdev", 00:07:32.142 "config": [ 00:07:32.142 { 00:07:32.142 "params": { 00:07:32.142 "trtype": "pcie", 00:07:32.142 "traddr": "0000:00:10.0", 00:07:32.142 "name": "Nvme0" 00:07:32.142 }, 00:07:32.142 "method": "bdev_nvme_attach_controller" 00:07:32.142 }, 00:07:32.142 { 00:07:32.142 "method": "bdev_wait_for_examine" 00:07:32.142 } 00:07:32.142 ] 00:07:32.142 } 00:07:32.142 ] 00:07:32.142 } 00:07:32.142 [2024-12-07 22:38:46.900343] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.399 [2024-12-07 22:38:46.934037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.400 [2024-12-07 22:38:46.963745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.400  [2024-12-07T22:38:47.424Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:32.658 00:07:32.658 22:38:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:32.658 22:38:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:32.658 22:38:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:32.658 22:38:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:32.658 [2024-12-07 22:38:47.230955] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:32.658 [2024-12-07 22:38:47.231236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71837 ] 00:07:32.658 { 00:07:32.658 "subsystems": [ 00:07:32.658 { 00:07:32.658 "subsystem": "bdev", 00:07:32.658 "config": [ 00:07:32.658 { 00:07:32.658 "params": { 00:07:32.658 "trtype": "pcie", 00:07:32.658 "traddr": "0000:00:10.0", 00:07:32.658 "name": "Nvme0" 00:07:32.658 }, 00:07:32.658 "method": "bdev_nvme_attach_controller" 00:07:32.658 }, 00:07:32.658 { 00:07:32.658 "method": "bdev_wait_for_examine" 00:07:32.658 } 00:07:32.658 ] 00:07:32.658 } 00:07:32.658 ] 00:07:32.658 } 00:07:32.658 [2024-12-07 22:38:47.363754] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.658 [2024-12-07 22:38:47.405370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.916 [2024-12-07 22:38:47.435493] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.916  [2024-12-07T22:38:47.682Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:32.916 00:07:32.916 22:38:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:32.916 22:38:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:32.916 22:38:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:32.916 22:38:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:32.916 22:38:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:32.916 22:38:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:32.916 22:38:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:32.916 22:38:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:32.916 22:38:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:32.916 22:38:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:32.916 22:38:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:33.174 { 00:07:33.174 "subsystems": [ 00:07:33.174 { 00:07:33.174 "subsystem": "bdev", 00:07:33.174 "config": [ 00:07:33.174 { 00:07:33.174 "params": { 00:07:33.174 "trtype": "pcie", 00:07:33.174 "traddr": "0000:00:10.0", 00:07:33.174 "name": "Nvme0" 00:07:33.174 }, 00:07:33.174 "method": "bdev_nvme_attach_controller" 00:07:33.174 }, 00:07:33.174 { 00:07:33.174 "method": "bdev_wait_for_examine" 00:07:33.174 } 00:07:33.174 ] 00:07:33.174 } 00:07:33.174 ] 00:07:33.174 } 00:07:33.174 [2024-12-07 22:38:47.711721] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:33.174 [2024-12-07 22:38:47.712002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71852 ] 00:07:33.174 [2024-12-07 22:38:47.845741] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.174 [2024-12-07 22:38:47.879928] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.174 [2024-12-07 22:38:47.910486] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:33.432  [2024-12-07T22:38:48.198Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:33.432 00:07:33.432 22:38:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:33.432 22:38:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:33.432 22:38:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:33.432 22:38:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:33.432 22:38:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:33.432 22:38:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:33.432 22:38:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:33.432 22:38:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:33.998 22:38:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:33.998 22:38:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:33.998 22:38:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:33.998 22:38:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:33.998 [2024-12-07 22:38:48.704191] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
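The per-tier counts are consistent with a fixed 61440-byte budget per pass, that is count = 61440 / bs rounded down; whether basic_rw.sh computes it exactly this way is not visible in the trace, but the arithmetic matches all three tiers:

    for bs in 4096 8192 16384; do
        count=$((61440 / bs))    # 15, 7, 3
        size=$((count * bs))     # 61440, 57344, 49152, as logged
    done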
00:07:33.998 [2024-12-07 22:38:48.704476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71871 ] 00:07:33.998 { 00:07:33.998 "subsystems": [ 00:07:33.998 { 00:07:33.998 "subsystem": "bdev", 00:07:33.998 "config": [ 00:07:33.998 { 00:07:33.998 "params": { 00:07:33.998 "trtype": "pcie", 00:07:33.998 "traddr": "0000:00:10.0", 00:07:33.998 "name": "Nvme0" 00:07:33.998 }, 00:07:33.998 "method": "bdev_nvme_attach_controller" 00:07:33.998 }, 00:07:33.998 { 00:07:33.998 "method": "bdev_wait_for_examine" 00:07:33.998 } 00:07:33.998 ] 00:07:33.998 } 00:07:33.998 ] 00:07:33.998 } 00:07:34.257 [2024-12-07 22:38:48.841226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.257 [2024-12-07 22:38:48.872583] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.257 [2024-12-07 22:38:48.900550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.257  [2024-12-07T22:38:49.282Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:34.516 00:07:34.516 22:38:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:34.516 22:38:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:34.516 22:38:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:34.516 22:38:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:34.516 [2024-12-07 22:38:49.173983] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:34.516 [2024-12-07 22:38:49.174072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71885 ] 00:07:34.516 { 00:07:34.516 "subsystems": [ 00:07:34.516 { 00:07:34.516 "subsystem": "bdev", 00:07:34.516 "config": [ 00:07:34.516 { 00:07:34.516 "params": { 00:07:34.516 "trtype": "pcie", 00:07:34.516 "traddr": "0000:00:10.0", 00:07:34.516 "name": "Nvme0" 00:07:34.516 }, 00:07:34.516 "method": "bdev_nvme_attach_controller" 00:07:34.516 }, 00:07:34.516 { 00:07:34.516 "method": "bdev_wait_for_examine" 00:07:34.516 } 00:07:34.516 ] 00:07:34.516 } 00:07:34.516 ] 00:07:34.516 } 00:07:34.775 [2024-12-07 22:38:49.310120] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.775 [2024-12-07 22:38:49.343174] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.775 [2024-12-07 22:38:49.370489] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.775  [2024-12-07T22:38:49.801Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:35.035 00:07:35.035 22:38:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:35.035 22:38:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:35.035 22:38:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:35.035 22:38:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:35.035 22:38:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:35.035 22:38:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:35.035 22:38:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:35.035 22:38:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:35.035 22:38:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:35.035 22:38:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:35.035 22:38:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:35.035 [2024-12-07 22:38:49.648643] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:35.035 [2024-12-07 22:38:49.648737] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71900 ] 00:07:35.035 { 00:07:35.035 "subsystems": [ 00:07:35.035 { 00:07:35.035 "subsystem": "bdev", 00:07:35.035 "config": [ 00:07:35.035 { 00:07:35.035 "params": { 00:07:35.035 "trtype": "pcie", 00:07:35.035 "traddr": "0000:00:10.0", 00:07:35.035 "name": "Nvme0" 00:07:35.035 }, 00:07:35.035 "method": "bdev_nvme_attach_controller" 00:07:35.035 }, 00:07:35.035 { 00:07:35.035 "method": "bdev_wait_for_examine" 00:07:35.035 } 00:07:35.035 ] 00:07:35.035 } 00:07:35.035 ] 00:07:35.035 } 00:07:35.035 [2024-12-07 22:38:49.785108] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.295 [2024-12-07 22:38:49.818380] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.295 [2024-12-07 22:38:49.845494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.295  [2024-12-07T22:38:50.320Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:07:35.555 00:07:35.555 22:38:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:35.555 22:38:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:35.555 22:38:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:35.555 22:38:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:35.555 22:38:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:35.555 22:38:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:35.555 22:38:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.124 22:38:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:36.124 22:38:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:36.124 22:38:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:36.124 22:38:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.124 [2024-12-07 22:38:50.664344] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:36.124 [2024-12-07 22:38:50.664627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71919 ] 00:07:36.124 { 00:07:36.124 "subsystems": [ 00:07:36.124 { 00:07:36.124 "subsystem": "bdev", 00:07:36.124 "config": [ 00:07:36.124 { 00:07:36.124 "params": { 00:07:36.124 "trtype": "pcie", 00:07:36.124 "traddr": "0000:00:10.0", 00:07:36.124 "name": "Nvme0" 00:07:36.124 }, 00:07:36.124 "method": "bdev_nvme_attach_controller" 00:07:36.124 }, 00:07:36.124 { 00:07:36.124 "method": "bdev_wait_for_examine" 00:07:36.124 } 00:07:36.124 ] 00:07:36.124 } 00:07:36.124 ] 00:07:36.124 } 00:07:36.124 [2024-12-07 22:38:50.792640] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.124 [2024-12-07 22:38:50.824326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.124 [2024-12-07 22:38:50.851757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.382  [2024-12-07T22:38:51.148Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:36.382 00:07:36.382 22:38:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:36.382 22:38:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:36.382 22:38:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:36.382 22:38:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.382 [2024-12-07 22:38:51.113724] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:36.382 [2024-12-07 22:38:51.113852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71933 ] 00:07:36.382 { 00:07:36.382 "subsystems": [ 00:07:36.382 { 00:07:36.382 "subsystem": "bdev", 00:07:36.382 "config": [ 00:07:36.382 { 00:07:36.382 "params": { 00:07:36.382 "trtype": "pcie", 00:07:36.382 "traddr": "0000:00:10.0", 00:07:36.382 "name": "Nvme0" 00:07:36.382 }, 00:07:36.382 "method": "bdev_nvme_attach_controller" 00:07:36.382 }, 00:07:36.383 { 00:07:36.383 "method": "bdev_wait_for_examine" 00:07:36.383 } 00:07:36.383 ] 00:07:36.383 } 00:07:36.383 ] 00:07:36.383 } 00:07:36.641 [2024-12-07 22:38:51.242800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.641 [2024-12-07 22:38:51.277929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.641 [2024-12-07 22:38:51.305862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:36.641  [2024-12-07T22:38:51.666Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:36.900 00:07:36.900 22:38:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.900 22:38:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:36.900 22:38:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:36.900 22:38:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:36.900 22:38:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:36.900 22:38:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:36.900 22:38:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:36.900 22:38:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:36.900 22:38:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:36.900 22:38:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:36.900 22:38:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.900 [2024-12-07 22:38:51.592717] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:36.900 [2024-12-07 22:38:51.592873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71943 ] 00:07:36.900 { 00:07:36.900 "subsystems": [ 00:07:36.900 { 00:07:36.900 "subsystem": "bdev", 00:07:36.900 "config": [ 00:07:36.900 { 00:07:36.900 "params": { 00:07:36.900 "trtype": "pcie", 00:07:36.900 "traddr": "0000:00:10.0", 00:07:36.900 "name": "Nvme0" 00:07:36.900 }, 00:07:36.900 "method": "bdev_nvme_attach_controller" 00:07:36.900 }, 00:07:36.900 { 00:07:36.900 "method": "bdev_wait_for_examine" 00:07:36.900 } 00:07:36.900 ] 00:07:36.900 } 00:07:36.900 ] 00:07:36.900 } 00:07:37.158 [2024-12-07 22:38:51.723658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.158 [2024-12-07 22:38:51.757111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.158 [2024-12-07 22:38:51.784182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.158  [2024-12-07T22:38:52.183Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:37.417 00:07:37.417 22:38:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:37.417 22:38:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:37.417 22:38:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:37.417 22:38:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:37.417 22:38:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:37.417 22:38:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:37.417 22:38:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:37.417 22:38:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.984 22:38:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:37.984 22:38:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:37.984 22:38:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:37.984 22:38:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.984 [2024-12-07 22:38:52.495785] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:37.984 [2024-12-07 22:38:52.495903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71962 ] 00:07:37.984 { 00:07:37.984 "subsystems": [ 00:07:37.984 { 00:07:37.984 "subsystem": "bdev", 00:07:37.984 "config": [ 00:07:37.984 { 00:07:37.984 "params": { 00:07:37.984 "trtype": "pcie", 00:07:37.984 "traddr": "0000:00:10.0", 00:07:37.984 "name": "Nvme0" 00:07:37.984 }, 00:07:37.984 "method": "bdev_nvme_attach_controller" 00:07:37.984 }, 00:07:37.984 { 00:07:37.984 "method": "bdev_wait_for_examine" 00:07:37.984 } 00:07:37.984 ] 00:07:37.984 } 00:07:37.984 ] 00:07:37.984 } 00:07:37.984 [2024-12-07 22:38:52.629311] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.984 [2024-12-07 22:38:52.669649] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.984 [2024-12-07 22:38:52.702268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.243  [2024-12-07T22:38:53.009Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:38.243 00:07:38.243 22:38:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:38.243 22:38:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:38.243 22:38:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:38.243 22:38:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.243 [2024-12-07 22:38:52.974544] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:38.243 [2024-12-07 22:38:52.974643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71975 ] 00:07:38.243 { 00:07:38.243 "subsystems": [ 00:07:38.243 { 00:07:38.243 "subsystem": "bdev", 00:07:38.243 "config": [ 00:07:38.243 { 00:07:38.243 "params": { 00:07:38.243 "trtype": "pcie", 00:07:38.243 "traddr": "0000:00:10.0", 00:07:38.243 "name": "Nvme0" 00:07:38.243 }, 00:07:38.243 "method": "bdev_nvme_attach_controller" 00:07:38.243 }, 00:07:38.243 { 00:07:38.243 "method": "bdev_wait_for_examine" 00:07:38.243 } 00:07:38.243 ] 00:07:38.243 } 00:07:38.243 ] 00:07:38.243 } 00:07:38.501 [2024-12-07 22:38:53.110387] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.501 [2024-12-07 22:38:53.145903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.501 [2024-12-07 22:38:53.172296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.501  [2024-12-07T22:38:53.525Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:38.759 00:07:38.759 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:38.759 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:38.759 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:38.760 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:38.760 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:38.760 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:38.760 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:38.760 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:38.760 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:38.760 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:38.760 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.760 [2024-12-07 22:38:53.437637] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:38.760 [2024-12-07 22:38:53.437740] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71991 ] 00:07:38.760 { 00:07:38.760 "subsystems": [ 00:07:38.760 { 00:07:38.760 "subsystem": "bdev", 00:07:38.760 "config": [ 00:07:38.760 { 00:07:38.760 "params": { 00:07:38.760 "trtype": "pcie", 00:07:38.760 "traddr": "0000:00:10.0", 00:07:38.760 "name": "Nvme0" 00:07:38.760 }, 00:07:38.760 "method": "bdev_nvme_attach_controller" 00:07:38.760 }, 00:07:38.760 { 00:07:38.760 "method": "bdev_wait_for_examine" 00:07:38.760 } 00:07:38.760 ] 00:07:38.760 } 00:07:38.760 ] 00:07:38.760 } 00:07:39.018 [2024-12-07 22:38:53.569464] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.018 [2024-12-07 22:38:53.599490] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.018 [2024-12-07 22:38:53.626638] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.018  [2024-12-07T22:38:54.043Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:39.277 00:07:39.277 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:39.277 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:39.277 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:39.277 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:39.277 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:39.277 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:39.277 22:38:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.540 22:38:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:39.540 22:38:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:39.540 22:38:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:39.540 22:38:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.824 [2024-12-07 22:38:54.345450] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:39.824 [2024-12-07 22:38:54.345550] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72010 ] 00:07:39.824 { 00:07:39.824 "subsystems": [ 00:07:39.824 { 00:07:39.824 "subsystem": "bdev", 00:07:39.824 "config": [ 00:07:39.824 { 00:07:39.824 "params": { 00:07:39.824 "trtype": "pcie", 00:07:39.824 "traddr": "0000:00:10.0", 00:07:39.824 "name": "Nvme0" 00:07:39.824 }, 00:07:39.824 "method": "bdev_nvme_attach_controller" 00:07:39.824 }, 00:07:39.824 { 00:07:39.824 "method": "bdev_wait_for_examine" 00:07:39.824 } 00:07:39.824 ] 00:07:39.824 } 00:07:39.824 ] 00:07:39.824 } 00:07:39.824 [2024-12-07 22:38:54.482379] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.824 [2024-12-07 22:38:54.512970] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.824 [2024-12-07 22:38:54.539780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.111  [2024-12-07T22:38:54.877Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:40.111 00:07:40.111 22:38:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:40.111 22:38:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:40.111 22:38:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:40.111 22:38:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.111 [2024-12-07 22:38:54.807399] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:40.111 [2024-12-07 22:38:54.807496] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72023 ] 00:07:40.111 { 00:07:40.111 "subsystems": [ 00:07:40.111 { 00:07:40.111 "subsystem": "bdev", 00:07:40.111 "config": [ 00:07:40.111 { 00:07:40.111 "params": { 00:07:40.111 "trtype": "pcie", 00:07:40.111 "traddr": "0000:00:10.0", 00:07:40.111 "name": "Nvme0" 00:07:40.111 }, 00:07:40.111 "method": "bdev_nvme_attach_controller" 00:07:40.111 }, 00:07:40.111 { 00:07:40.111 "method": "bdev_wait_for_examine" 00:07:40.111 } 00:07:40.111 ] 00:07:40.111 } 00:07:40.111 ] 00:07:40.111 } 00:07:40.381 [2024-12-07 22:38:54.949142] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.381 [2024-12-07 22:38:54.979460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.381 [2024-12-07 22:38:55.007316] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.381  [2024-12-07T22:38:55.405Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:40.639 00:07:40.639 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.639 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:40.639 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:40.639 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:40.639 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:40.639 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:40.639 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:40.639 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:40.639 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:40.639 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:40.639 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.639 [2024-12-07 22:38:55.281286] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:40.639 [2024-12-07 22:38:55.281393] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72039 ] 00:07:40.639 { 00:07:40.639 "subsystems": [ 00:07:40.639 { 00:07:40.639 "subsystem": "bdev", 00:07:40.639 "config": [ 00:07:40.639 { 00:07:40.639 "params": { 00:07:40.639 "trtype": "pcie", 00:07:40.639 "traddr": "0000:00:10.0", 00:07:40.639 "name": "Nvme0" 00:07:40.639 }, 00:07:40.639 "method": "bdev_nvme_attach_controller" 00:07:40.639 }, 00:07:40.640 { 00:07:40.640 "method": "bdev_wait_for_examine" 00:07:40.640 } 00:07:40.640 ] 00:07:40.640 } 00:07:40.640 ] 00:07:40.640 } 00:07:40.897 [2024-12-07 22:38:55.417150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.897 [2024-12-07 22:38:55.448747] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.897 [2024-12-07 22:38:55.475353] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.897  [2024-12-07T22:38:55.921Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:41.155 00:07:41.155 00:07:41.155 real 0m11.550s 00:07:41.155 user 0m8.523s 00:07:41.155 sys 0m3.638s 00:07:41.155 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:41.155 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.155 ************************************ 00:07:41.155 END TEST dd_rw 00:07:41.155 ************************************ 00:07:41.155 22:38:55 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:41.156 22:38:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:41.156 22:38:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:41.156 22:38:55 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.156 ************************************ 00:07:41.156 START TEST dd_rw_offset 00:07:41.156 ************************************ 00:07:41.156 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:07:41.156 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:41.156 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:41.156 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:41.156 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:41.156 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:41.156 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=<4096 bytes of random alphanumeric test data elided>
00:07:41.156 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:41.156 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:41.156 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:41.156 22:38:55 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:41.156 [2024-12-07 22:38:55.854963] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:41.156 [2024-12-07 22:38:55.855052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72069 ] 00:07:41.156 { 00:07:41.156 "subsystems": [ 00:07:41.156 { 00:07:41.156 "subsystem": "bdev", 00:07:41.156 "config": [ 00:07:41.156 { 00:07:41.156 "params": { 00:07:41.156 "trtype": "pcie", 00:07:41.156 "traddr": "0000:00:10.0", 00:07:41.156 "name": "Nvme0" 00:07:41.156 }, 00:07:41.156 "method": "bdev_nvme_attach_controller" 00:07:41.156 }, 00:07:41.156 { 00:07:41.156 "method": "bdev_wait_for_examine" 00:07:41.156 } 00:07:41.156 ] 00:07:41.156 } 00:07:41.156 ] 00:07:41.156 } 00:07:41.425 [2024-12-07 22:38:55.990833] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.425 [2024-12-07 22:38:56.021299] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.425 [2024-12-07 22:38:56.048632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.425  [2024-12-07T22:38:56.449Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:41.683 00:07:41.683 22:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:41.683 22:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:41.683 22:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:41.683 22:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:41.683 [2024-12-07 22:38:56.318715] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
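The write pass above pushes the 4 KiB payload onto the NVMe bdev one block in (--seek=1), and the read pass that follows pulls it back with --skip=1. The JSON interleaved with the EAL output is the bdev configuration that gen_conf feeds to spdk_dd over --json /dev/fd/62. A minimal standalone sketch of that wiring (the process-substitution idiom is an assumption about how gen_conf is plumbed in; the JSON, PCI address, and flags are taken from the log, and spdk_dd abbreviates the full build path):

    # Attach the PCIe controller at 0000:00:10.0 as bdev "Nvme0", then write
    # dd.dump0 to Nvme0n1 starting at logical block 1.
    conf='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }'
    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json <(printf '%s' "$conf")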
00:07:41.683 [2024-12-07 22:38:56.318826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72083 ] 00:07:41.683 { 00:07:41.683 "subsystems": [ 00:07:41.683 { 00:07:41.683 "subsystem": "bdev", 00:07:41.683 "config": [ 00:07:41.683 { 00:07:41.683 "params": { 00:07:41.683 "trtype": "pcie", 00:07:41.683 "traddr": "0000:00:10.0", 00:07:41.683 "name": "Nvme0" 00:07:41.683 }, 00:07:41.683 "method": "bdev_nvme_attach_controller" 00:07:41.683 }, 00:07:41.683 { 00:07:41.683 "method": "bdev_wait_for_examine" 00:07:41.683 } 00:07:41.683 ] 00:07:41.683 } 00:07:41.683 ] 00:07:41.683 } 00:07:41.942 [2024-12-07 22:38:56.453679] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.942 [2024-12-07 22:38:56.484599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.942 [2024-12-07 22:38:56.511372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.942  [2024-12-07T22:38:56.967Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:42.201 00:07:42.202 22:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check
00:07:42.202 22:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ <the same 4096-byte payload as above, elided> == <its backslash-escaped glob form, elided> ]] 00:07:42.202 ************************************ 00:07:42.202 END TEST dd_rw_offset 00:07:42.202 ************************************ 00:07:42.202 00:07:42.202 real 0m0.967s 00:07:42.202 user 0m0.672s 00:07:42.202 sys 0m0.363s 00:07:42.202 22:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.202 22:38:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x
00:07:42.202 22:38:56 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:42.202 22:38:56 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:42.202 22:38:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:42.202 22:38:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:42.202 22:38:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:42.202 22:38:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:42.202 22:38:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:42.202 22:38:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:42.202 22:38:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:42.202 22:38:56 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:42.202 22:38:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.202 [2024-12-07 22:38:56.819395] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
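The cleanup that begins here (clear_nvme) scrubs what the test wrote by streaming zeroes back over the bdev; with size=0xffff and bs=1048576 a single 1 MiB write covers the whole region, which is presumably why count works out to 1. In outline (spdk_dd abbreviates the full build path; the ceil() is an inference from the locals shown in the log):

    size=$((0xffff)); bs=1048576
    count=$(( (size + bs - 1) / bs ))   # -> 1: one block is enough
    spdk_dd --if=/dev/zero --bs="$bs" --ob=Nvme0n1 --count="$count"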
00:07:42.202 [2024-12-07 22:38:56.819500] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72107 ] 00:07:42.202 { 00:07:42.202 "subsystems": [ 00:07:42.202 { 00:07:42.202 "subsystem": "bdev", 00:07:42.202 "config": [ 00:07:42.202 { 00:07:42.202 "params": { 00:07:42.202 "trtype": "pcie", 00:07:42.202 "traddr": "0000:00:10.0", 00:07:42.202 "name": "Nvme0" 00:07:42.202 }, 00:07:42.202 "method": "bdev_nvme_attach_controller" 00:07:42.202 }, 00:07:42.202 { 00:07:42.202 "method": "bdev_wait_for_examine" 00:07:42.202 } 00:07:42.202 ] 00:07:42.202 } 00:07:42.202 ] 00:07:42.202 } 00:07:42.202 [2024-12-07 22:38:56.954559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.462 [2024-12-07 22:38:56.986118] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.462 [2024-12-07 22:38:57.013883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.462  [2024-12-07T22:38:57.228Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:42.462 00:07:42.722 22:38:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:42.722 00:07:42.722 real 0m13.991s 00:07:42.722 user 0m10.048s 00:07:42.722 sys 0m4.484s 00:07:42.722 22:38:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.722 ************************************ 00:07:42.722 END TEST spdk_dd_basic_rw 00:07:42.722 22:38:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.722 ************************************ 00:07:42.722 22:38:57 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:42.722 22:38:57 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.722 22:38:57 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.722 22:38:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:42.722 ************************************ 00:07:42.722 START TEST spdk_dd_posix 00:07:42.722 ************************************ 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:42.722 * Looking for test storage... 
00:07:42.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lcov --version 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:42.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.722 --rc genhtml_branch_coverage=1 00:07:42.722 --rc genhtml_function_coverage=1 00:07:42.722 --rc genhtml_legend=1 00:07:42.722 --rc geninfo_all_blocks=1 00:07:42.722 --rc geninfo_unexecuted_blocks=1 00:07:42.722 00:07:42.722 ' 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:42.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.722 --rc genhtml_branch_coverage=1 00:07:42.722 --rc genhtml_function_coverage=1 00:07:42.722 --rc genhtml_legend=1 00:07:42.722 --rc geninfo_all_blocks=1 00:07:42.722 --rc geninfo_unexecuted_blocks=1 00:07:42.722 00:07:42.722 ' 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:42.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.722 --rc genhtml_branch_coverage=1 00:07:42.722 --rc genhtml_function_coverage=1 00:07:42.722 --rc genhtml_legend=1 00:07:42.722 --rc geninfo_all_blocks=1 00:07:42.722 --rc geninfo_unexecuted_blocks=1 00:07:42.722 00:07:42.722 ' 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:42.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.722 --rc genhtml_branch_coverage=1 00:07:42.722 --rc genhtml_function_coverage=1 00:07:42.722 --rc genhtml_legend=1 00:07:42.722 --rc geninfo_all_blocks=1 00:07:42.722 --rc geninfo_unexecuted_blocks=1 00:07:42.722 00:07:42.722 ' 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.722 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:<the same three toolchain prefixes repeated several more times, elided>:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=<same value rotated to lead with /opt/go/1.21.1/bin, elided> 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=<same value rotated to lead with /opt/protoc/21.7/bin, elided> 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo <the exported PATH value, elided>
00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:42.723 * First test run, liburing in use 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- #
xtrace_disable 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:42.723 ************************************ 00:07:42.723 START TEST dd_flag_append 00:07:42.723 ************************************ 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:42.723 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:42.983 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:42.983 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:42.983 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:42.983 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=vd4fx55hntesogxxgx9n8ickhsi7rt5m 00:07:42.983 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:42.983 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:42.983 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:42.983 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=tuq8fbixmjs2ftyufq9l7ae9wkjqahcs 00:07:42.983 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s vd4fx55hntesogxxgx9n8ickhsi7rt5m 00:07:42.983 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s tuq8fbixmjs2ftyufq9l7ae9wkjqahcs 00:07:42.983 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:42.983 [2024-12-07 22:38:57.543373] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
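dd_flag_append seeds two 32-byte random strings, writes one into each dump file, then copies dump0 onto dump1 with --oflag=append; the pattern check just below confirms that dump1 ends up holding its own payload with dump0's appended to the end rather than being truncated. Reduced to the essentials (spdk_dd stands in for the full build path):

    printf %s "$dump0" > dd.dump0
    printf %s "$dump1" > dd.dump1
    spdk_dd --if=dd.dump0 --of=dd.dump1 --oflag=append   # O_APPEND: no truncate
    [[ "$(<dd.dump1)" == "${dump1}${dump0}" ]]           # expected concatenation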
00:07:42.983 [2024-12-07 22:38:57.543479] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72179 ] 00:07:42.983 [2024-12-07 22:38:57.679150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.983 [2024-12-07 22:38:57.709402] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.983 [2024-12-07 22:38:57.735021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.242  [2024-12-07T22:38:58.008Z] Copying: 32/32 [B] (average 31 kBps) 00:07:43.242 00:07:43.242 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ tuq8fbixmjs2ftyufq9l7ae9wkjqahcsvd4fx55hntesogxxgx9n8ickhsi7rt5m == \t\u\q\8\f\b\i\x\m\j\s\2\f\t\y\u\f\q\9\l\7\a\e\9\w\k\j\q\a\h\c\s\v\d\4\f\x\5\5\h\n\t\e\s\o\g\x\x\g\x\9\n\8\i\c\k\h\s\i\7\r\t\5\m ]] 00:07:43.242 00:07:43.242 real 0m0.396s 00:07:43.242 user 0m0.202s 00:07:43.242 sys 0m0.166s 00:07:43.242 ************************************ 00:07:43.242 END TEST dd_flag_append 00:07:43.242 ************************************ 00:07:43.242 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:43.242 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:43.242 22:38:57 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:43.242 22:38:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:43.242 22:38:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:43.242 22:38:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:43.242 ************************************ 00:07:43.242 START TEST dd_flag_directory 00:07:43.242 ************************************ 00:07:43.242 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:07:43.242 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:43.242 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:43.242 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:43.242 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.242 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.242 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.242 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.243 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.243 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.243 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.243 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:43.243 22:38:57 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:43.243 [2024-12-07 22:38:57.987348] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:43.243 [2024-12-07 22:38:57.987451] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72207 ] 00:07:43.502 [2024-12-07 22:38:58.124239] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.502 [2024-12-07 22:38:58.157463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.502 [2024-12-07 22:38:58.186237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.502 [2024-12-07 22:38:58.200294] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:43.502 [2024-12-07 22:38:58.200346] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:43.502 [2024-12-07 22:38:58.200373] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:43.502 [2024-12-07 22:38:58.259632] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:43.761 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:43.761 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:43.761 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:43.761 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:43.761 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:43.761 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:43.761 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:43.761 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:43.761 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:43.761 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.761 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.761 22:38:58 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.761 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.761 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.762 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:43.762 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.762 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:43.762 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:43.762 [2024-12-07 22:38:58.374172] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:43.762 [2024-12-07 22:38:58.374267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72217 ] 00:07:43.762 [2024-12-07 22:38:58.509787] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.021 [2024-12-07 22:38:58.541264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.021 [2024-12-07 22:38:58.566661] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.021 [2024-12-07 22:38:58.580570] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:44.021 [2024-12-07 22:38:58.580637] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:44.021 [2024-12-07 22:38:58.580667] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.021 [2024-12-07 22:38:58.634479] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:44.021 00:07:44.021 real 0m0.765s 00:07:44.021 user 0m0.370s 00:07:44.021 sys 0m0.187s 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.021 ************************************ 00:07:44.021 END TEST dd_flag_directory 00:07:44.021 ************************************ 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:44.021 22:38:58 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:44.021 ************************************ 00:07:44.021 START TEST dd_flag_nofollow 00:07:44.021 ************************************ 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.021 22:38:58 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.281 [2024-12-07 22:38:58.803468] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
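This test, like dd_flag_directory before it, exercises an expected failure: the dump files are shadowed by symlinks and spdk_dd is run under NOT with --iflag=nofollow, so the open must fail with "Too many levels of symbolic links". The es= lines that follow show the status post-processing: raw codes above 128 are folded down (216 becomes 88 here, and 236 became 108 in the directory test above) and then collapsed to a generic 1 before asserting non-zero. A stripped-down sketch of that idiom (not the exact autotest helper):

    NOT() {                                   # succeed only if the command fails
        local es=0
        "$@" || es=$?
        (( es == 0 )) && return 1             # unexpectedly succeeded
        (( es > 128 )) && es=$(( es - 128 ))  # fold signal-style statuses
        (( es != 0 )) && es=1                 # collapse to a generic failure code
        return 0
    }
    NOT spdk_dd --if=dd.dump0.link --iflag=nofollow --of=dd.dump1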
00:07:44.281 [2024-12-07 22:38:58.803566] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72240 ] 00:07:44.281 [2024-12-07 22:38:58.943237] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.281 [2024-12-07 22:38:58.983260] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.281 [2024-12-07 22:38:59.014879] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.281 [2024-12-07 22:38:59.031929] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:44.281 [2024-12-07 22:38:59.032000] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:44.281 [2024-12-07 22:38:59.032017] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.541 [2024-12-07 22:38:59.089709] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:44.541 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:44.541 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:44.541 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:44.541 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:44.541 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:44.541 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:44.541 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:44.541 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:44.541 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:44.541 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.541 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.541 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.541 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.541 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.541 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.541 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.541 22:38:59 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.541 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:44.541 [2024-12-07 22:38:59.207905] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:44.541 [2024-12-07 22:38:59.207996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72255 ] 00:07:44.803 [2024-12-07 22:38:59.342120] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.803 [2024-12-07 22:38:59.372910] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.803 [2024-12-07 22:38:59.398948] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.803 [2024-12-07 22:38:59.413093] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:44.803 [2024-12-07 22:38:59.413162] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:44.803 [2024-12-07 22:38:59.413192] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.803 [2024-12-07 22:38:59.468293] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:44.803 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:44.803 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:44.803 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:44.803 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:44.803 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:44.803 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:44.803 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:44.803 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:44.803 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:44.803 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.062 [2024-12-07 22:38:59.583913] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
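For the positive case the test regenerates a 512-byte payload with gen_bytes and copies it through dd.dump0.link without the nofollow flag, which must now succeed. gen_bytes itself is not shown in this log; a plausible stand-in that yields N random lowercase-alphanumeric bytes, matching the payloads visible in the pattern checks, would be:

    # Hypothetical gen_bytes: N random bytes drawn from [a-z0-9].
    gen_bytes() {
        tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"
    }
    gen_bytes 512 > dd.dump0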
00:07:45.062 [2024-12-07 22:38:59.584016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72257 ] 00:07:45.062 [2024-12-07 22:38:59.716174] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.062 [2024-12-07 22:38:59.746791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.062 [2024-12-07 22:38:59.772365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.062  [2024-12-07T22:39:00.086Z] Copying: 512/512 [B] (average 500 kBps) 00:07:45.320 00:07:45.320 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ imdl3qyfvsnz9zpouyo94yp2e8yalqjoh3cacjub6x86znuuogqea093vay9dbslkfq5ij58fiqmevz8j96l36rn9ufqayh9orwrj6w5k0ug5w3uvop0dmxd2kyj8n3htwxlygimkectcv9rz6woujcyp2eyjt0d0agxsig6hp7gjergbucsc4pwaxesoat02ack20cqxrct97jkduql09pcc6dl1zypwj8q1cw77wewq76h6vcuk2i3yegwzlfint68e60c82x8unn8vmeuyw04kt75l2mw5tewwx52qb47hp0cc8kyyj9brxq4er212y5t0t19175novmqf02iagkfrb89f80ftwzvokjb4vht2ejvd2mpryssnd7a574c0dh2dj49v3c74gd1mehbwfasca227y0j00n9h4j6f1yeko5vu1forpoq1a8j2qpw57xewb0kwgin22sfbzhcoauwu0gl1kwys70x3ngicygeiierzvmhibfj48mhpr29 == \i\m\d\l\3\q\y\f\v\s\n\z\9\z\p\o\u\y\o\9\4\y\p\2\e\8\y\a\l\q\j\o\h\3\c\a\c\j\u\b\6\x\8\6\z\n\u\u\o\g\q\e\a\0\9\3\v\a\y\9\d\b\s\l\k\f\q\5\i\j\5\8\f\i\q\m\e\v\z\8\j\9\6\l\3\6\r\n\9\u\f\q\a\y\h\9\o\r\w\r\j\6\w\5\k\0\u\g\5\w\3\u\v\o\p\0\d\m\x\d\2\k\y\j\8\n\3\h\t\w\x\l\y\g\i\m\k\e\c\t\c\v\9\r\z\6\w\o\u\j\c\y\p\2\e\y\j\t\0\d\0\a\g\x\s\i\g\6\h\p\7\g\j\e\r\g\b\u\c\s\c\4\p\w\a\x\e\s\o\a\t\0\2\a\c\k\2\0\c\q\x\r\c\t\9\7\j\k\d\u\q\l\0\9\p\c\c\6\d\l\1\z\y\p\w\j\8\q\1\c\w\7\7\w\e\w\q\7\6\h\6\v\c\u\k\2\i\3\y\e\g\w\z\l\f\i\n\t\6\8\e\6\0\c\8\2\x\8\u\n\n\8\v\m\e\u\y\w\0\4\k\t\7\5\l\2\m\w\5\t\e\w\w\x\5\2\q\b\4\7\h\p\0\c\c\8\k\y\y\j\9\b\r\x\q\4\e\r\2\1\2\y\5\t\0\t\1\9\1\7\5\n\o\v\m\q\f\0\2\i\a\g\k\f\r\b\8\9\f\8\0\f\t\w\z\v\o\k\j\b\4\v\h\t\2\e\j\v\d\2\m\p\r\y\s\s\n\d\7\a\5\7\4\c\0\d\h\2\d\j\4\9\v\3\c\7\4\g\d\1\m\e\h\b\w\f\a\s\c\a\2\2\7\y\0\j\0\0\n\9\h\4\j\6\f\1\y\e\k\o\5\v\u\1\f\o\r\p\o\q\1\a\8\j\2\q\p\w\5\7\x\e\w\b\0\k\w\g\i\n\2\2\s\f\b\z\h\c\o\a\u\w\u\0\g\l\1\k\w\y\s\7\0\x\3\n\g\i\c\y\g\e\i\i\e\r\z\v\m\h\i\b\f\j\4\8\m\h\p\r\2\9 ]] 00:07:45.320 00:07:45.320 real 0m1.161s 00:07:45.320 user 0m0.564s 00:07:45.320 sys 0m0.350s 00:07:45.320 ************************************ 00:07:45.320 END TEST dd_flag_nofollow 00:07:45.321 ************************************ 00:07:45.321 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:45.321 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:45.321 22:38:59 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:45.321 22:38:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:45.321 22:38:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:45.321 22:38:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:45.321 ************************************ 00:07:45.321 START TEST dd_flag_noatime 00:07:45.321 ************************************ 00:07:45.321 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:07:45.321 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:45.321 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:45.321 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:45.321 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:45.321 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:45.321 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:45.321 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1733611139 00:07:45.321 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.321 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1733611139 00:07:45.321 22:38:59 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:46.255 22:39:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.514 [2024-12-07 22:39:01.029328] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:46.514 [2024-12-07 22:39:01.029439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72298 ] 00:07:46.514 [2024-12-07 22:39:01.168862] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.514 [2024-12-07 22:39:01.209596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.514 [2024-12-07 22:39:01.242594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.514  [2024-12-07T22:39:01.539Z] Copying: 512/512 [B] (average 500 kBps) 00:07:46.773 00:07:46.773 22:39:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:46.773 22:39:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1733611139 )) 00:07:46.773 22:39:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.773 22:39:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1733611139 )) 00:07:46.773 22:39:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.773 [2024-12-07 22:39:01.457144] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
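The noatime pass records each file's access time with stat --printf=%X, sleeps past the one-second timestamp granularity, and copies with --iflag=noatime (presumably O_NOATIME underneath); the checks just below assert that the source's atime did not move, and that it does advance once a plain, unflagged copy runs. Its skeleton (spdk_dd abbreviates the full build path):

    atime_before=$(stat --printf=%X dd.dump0)
    sleep 1                                              # step past 1 s atime resolution
    spdk_dd --if=dd.dump0 --iflag=noatime --of=dd.dump1
    (( atime_before == $(stat --printf=%X dd.dump0) ))   # atime unchanged
    spdk_dd --if=dd.dump0 --of=dd.dump1                  # plain copy...
    (( atime_before <  $(stat --printf=%X dd.dump0) ))   # ...does bump atime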
00:07:46.773 [2024-12-07 22:39:01.457241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72313 ] 00:07:47.032 [2024-12-07 22:39:01.592125] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.032 [2024-12-07 22:39:01.623786] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.032 [2024-12-07 22:39:01.650189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.032  [2024-12-07T22:39:01.798Z] Copying: 512/512 [B] (average 500 kBps) 00:07:47.032 00:07:47.032 22:39:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:47.032 22:39:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1733611141 )) 00:07:47.032 00:07:47.032 real 0m1.840s 00:07:47.032 user 0m0.398s 00:07:47.032 sys 0m0.378s 00:07:47.032 22:39:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.032 22:39:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:47.032 ************************************ 00:07:47.032 END TEST dd_flag_noatime 00:07:47.032 ************************************ 00:07:47.291 22:39:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:47.291 22:39:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:47.291 22:39:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:47.291 22:39:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:47.291 ************************************ 00:07:47.291 START TEST dd_flags_misc 00:07:47.291 ************************************ 00:07:47.291 22:39:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:07:47.291 22:39:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:47.291 22:39:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:47.291 22:39:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:47.291 22:39:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:47.291 22:39:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:47.291 22:39:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:47.291 22:39:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:47.291 22:39:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:47.291 22:39:01 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:47.291 [2024-12-07 22:39:01.904283] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
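The eight spdk_dd invocations in this test are generated by a two-level loop over open(2) flags: each read-side flag is paired with each write-side flag. A minimal reconstruction, with the array names taken from the dd/posix.sh@81-82 trace and the paths from this log:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    in=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    out=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    flags_ro=(direct nonblock)               # read-side open flags
    flags_rw=("${flags_ro[@]}" sync dsync)   # write side adds sync/dsync

    for flag_ro in "${flags_ro[@]}"; do      # 2 x 4 = 8 copies in total
      for flag_rw in "${flags_rw[@]}"; do
        "$DD" --if="$in" --iflag="$flag_ro" --of="$out" --oflag="$flag_rw"
      done
    done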
00:07:47.291 [2024-12-07 22:39:01.904388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72336 ] 00:07:47.291 [2024-12-07 22:39:02.038150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.551 [2024-12-07 22:39:02.074637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.551 [2024-12-07 22:39:02.101450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.551  [2024-12-07T22:39:02.317Z] Copying: 512/512 [B] (average 500 kBps) 00:07:47.551 00:07:47.551 22:39:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2k7u36b69e0cl6yughvvl9ooxy2cz3bms5hz2i3v3byf7spjnxo78th6v5nc8o3ojciuu2mjqeg4kovgnwjyi0y5jd76kxcmdqsqkocktqbjcezlnflwc7xu2nl6dbbanqu15vamgpaxbfxracnj4pf2644zp5kzzr4kakxn4yzsr64o9b8jesy7j27trpelas2hgztdud13dgbmibrxzotan5atvmpqxkvdjqwrtx9hni9ffzoc08sbczopjmg2op7cvryf4byf1qj8qq53jtrb9dqeflxn56bhoatle3oofllxwi8abaxzwpatzlhmjxlorxj7yj527ton5n2hi9esurg4ryom4y5xpeut8nnbi5glo2xwxsxix2cxg3nv0sypzadjesub2f3vzu3zlekjhymqaxg8zectrqyp62m125a9coky43aifgr9d4htgruaq8gbw1r21xuhmjwwmbns2kyhakbo9ntwewdrprou0rh3w7zt1r8j02ukvjcr == \2\k\7\u\3\6\b\6\9\e\0\c\l\6\y\u\g\h\v\v\l\9\o\o\x\y\2\c\z\3\b\m\s\5\h\z\2\i\3\v\3\b\y\f\7\s\p\j\n\x\o\7\8\t\h\6\v\5\n\c\8\o\3\o\j\c\i\u\u\2\m\j\q\e\g\4\k\o\v\g\n\w\j\y\i\0\y\5\j\d\7\6\k\x\c\m\d\q\s\q\k\o\c\k\t\q\b\j\c\e\z\l\n\f\l\w\c\7\x\u\2\n\l\6\d\b\b\a\n\q\u\1\5\v\a\m\g\p\a\x\b\f\x\r\a\c\n\j\4\p\f\2\6\4\4\z\p\5\k\z\z\r\4\k\a\k\x\n\4\y\z\s\r\6\4\o\9\b\8\j\e\s\y\7\j\2\7\t\r\p\e\l\a\s\2\h\g\z\t\d\u\d\1\3\d\g\b\m\i\b\r\x\z\o\t\a\n\5\a\t\v\m\p\q\x\k\v\d\j\q\w\r\t\x\9\h\n\i\9\f\f\z\o\c\0\8\s\b\c\z\o\p\j\m\g\2\o\p\7\c\v\r\y\f\4\b\y\f\1\q\j\8\q\q\5\3\j\t\r\b\9\d\q\e\f\l\x\n\5\6\b\h\o\a\t\l\e\3\o\o\f\l\l\x\w\i\8\a\b\a\x\z\w\p\a\t\z\l\h\m\j\x\l\o\r\x\j\7\y\j\5\2\7\t\o\n\5\n\2\h\i\9\e\s\u\r\g\4\r\y\o\m\4\y\5\x\p\e\u\t\8\n\n\b\i\5\g\l\o\2\x\w\x\s\x\i\x\2\c\x\g\3\n\v\0\s\y\p\z\a\d\j\e\s\u\b\2\f\3\v\z\u\3\z\l\e\k\j\h\y\m\q\a\x\g\8\z\e\c\t\r\q\y\p\6\2\m\1\2\5\a\9\c\o\k\y\4\3\a\i\f\g\r\9\d\4\h\t\g\r\u\a\q\8\g\b\w\1\r\2\1\x\u\h\m\j\w\w\m\b\n\s\2\k\y\h\a\k\b\o\9\n\t\w\e\w\d\r\p\r\o\u\0\r\h\3\w\7\z\t\1\r\8\j\0\2\u\k\v\j\c\r ]] 00:07:47.551 22:39:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:47.551 22:39:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:47.551 [2024-12-07 22:39:02.299112] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:47.551 [2024-12-07 22:39:02.299215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72351 ] 00:07:47.810 [2024-12-07 22:39:02.434755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.810 [2024-12-07 22:39:02.468325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.810 [2024-12-07 22:39:02.494910] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.810  [2024-12-07T22:39:02.835Z] Copying: 512/512 [B] (average 500 kBps) 00:07:48.069 00:07:48.070 22:39:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2k7u36b69e0cl6yughvvl9ooxy2cz3bms5hz2i3v3byf7spjnxo78th6v5nc8o3ojciuu2mjqeg4kovgnwjyi0y5jd76kxcmdqsqkocktqbjcezlnflwc7xu2nl6dbbanqu15vamgpaxbfxracnj4pf2644zp5kzzr4kakxn4yzsr64o9b8jesy7j27trpelas2hgztdud13dgbmibrxzotan5atvmpqxkvdjqwrtx9hni9ffzoc08sbczopjmg2op7cvryf4byf1qj8qq53jtrb9dqeflxn56bhoatle3oofllxwi8abaxzwpatzlhmjxlorxj7yj527ton5n2hi9esurg4ryom4y5xpeut8nnbi5glo2xwxsxix2cxg3nv0sypzadjesub2f3vzu3zlekjhymqaxg8zectrqyp62m125a9coky43aifgr9d4htgruaq8gbw1r21xuhmjwwmbns2kyhakbo9ntwewdrprou0rh3w7zt1r8j02ukvjcr == \2\k\7\u\3\6\b\6\9\e\0\c\l\6\y\u\g\h\v\v\l\9\o\o\x\y\2\c\z\3\b\m\s\5\h\z\2\i\3\v\3\b\y\f\7\s\p\j\n\x\o\7\8\t\h\6\v\5\n\c\8\o\3\o\j\c\i\u\u\2\m\j\q\e\g\4\k\o\v\g\n\w\j\y\i\0\y\5\j\d\7\6\k\x\c\m\d\q\s\q\k\o\c\k\t\q\b\j\c\e\z\l\n\f\l\w\c\7\x\u\2\n\l\6\d\b\b\a\n\q\u\1\5\v\a\m\g\p\a\x\b\f\x\r\a\c\n\j\4\p\f\2\6\4\4\z\p\5\k\z\z\r\4\k\a\k\x\n\4\y\z\s\r\6\4\o\9\b\8\j\e\s\y\7\j\2\7\t\r\p\e\l\a\s\2\h\g\z\t\d\u\d\1\3\d\g\b\m\i\b\r\x\z\o\t\a\n\5\a\t\v\m\p\q\x\k\v\d\j\q\w\r\t\x\9\h\n\i\9\f\f\z\o\c\0\8\s\b\c\z\o\p\j\m\g\2\o\p\7\c\v\r\y\f\4\b\y\f\1\q\j\8\q\q\5\3\j\t\r\b\9\d\q\e\f\l\x\n\5\6\b\h\o\a\t\l\e\3\o\o\f\l\l\x\w\i\8\a\b\a\x\z\w\p\a\t\z\l\h\m\j\x\l\o\r\x\j\7\y\j\5\2\7\t\o\n\5\n\2\h\i\9\e\s\u\r\g\4\r\y\o\m\4\y\5\x\p\e\u\t\8\n\n\b\i\5\g\l\o\2\x\w\x\s\x\i\x\2\c\x\g\3\n\v\0\s\y\p\z\a\d\j\e\s\u\b\2\f\3\v\z\u\3\z\l\e\k\j\h\y\m\q\a\x\g\8\z\e\c\t\r\q\y\p\6\2\m\1\2\5\a\9\c\o\k\y\4\3\a\i\f\g\r\9\d\4\h\t\g\r\u\a\q\8\g\b\w\1\r\2\1\x\u\h\m\j\w\w\m\b\n\s\2\k\y\h\a\k\b\o\9\n\t\w\e\w\d\r\p\r\o\u\0\r\h\3\w\7\z\t\1\r\8\j\0\2\u\k\v\j\c\r ]] 00:07:48.070 22:39:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:48.070 22:39:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:48.070 [2024-12-07 22:39:02.681449] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:48.070 [2024-12-07 22:39:02.681554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72355 ] 00:07:48.070 [2024-12-07 22:39:02.816485] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.328 [2024-12-07 22:39:02.848798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.328 [2024-12-07 22:39:02.876796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.328  [2024-12-07T22:39:03.094Z] Copying: 512/512 [B] (average 166 kBps) 00:07:48.328 00:07:48.328 22:39:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2k7u36b69e0cl6yughvvl9ooxy2cz3bms5hz2i3v3byf7spjnxo78th6v5nc8o3ojciuu2mjqeg4kovgnwjyi0y5jd76kxcmdqsqkocktqbjcezlnflwc7xu2nl6dbbanqu15vamgpaxbfxracnj4pf2644zp5kzzr4kakxn4yzsr64o9b8jesy7j27trpelas2hgztdud13dgbmibrxzotan5atvmpqxkvdjqwrtx9hni9ffzoc08sbczopjmg2op7cvryf4byf1qj8qq53jtrb9dqeflxn56bhoatle3oofllxwi8abaxzwpatzlhmjxlorxj7yj527ton5n2hi9esurg4ryom4y5xpeut8nnbi5glo2xwxsxix2cxg3nv0sypzadjesub2f3vzu3zlekjhymqaxg8zectrqyp62m125a9coky43aifgr9d4htgruaq8gbw1r21xuhmjwwmbns2kyhakbo9ntwewdrprou0rh3w7zt1r8j02ukvjcr == \2\k\7\u\3\6\b\6\9\e\0\c\l\6\y\u\g\h\v\v\l\9\o\o\x\y\2\c\z\3\b\m\s\5\h\z\2\i\3\v\3\b\y\f\7\s\p\j\n\x\o\7\8\t\h\6\v\5\n\c\8\o\3\o\j\c\i\u\u\2\m\j\q\e\g\4\k\o\v\g\n\w\j\y\i\0\y\5\j\d\7\6\k\x\c\m\d\q\s\q\k\o\c\k\t\q\b\j\c\e\z\l\n\f\l\w\c\7\x\u\2\n\l\6\d\b\b\a\n\q\u\1\5\v\a\m\g\p\a\x\b\f\x\r\a\c\n\j\4\p\f\2\6\4\4\z\p\5\k\z\z\r\4\k\a\k\x\n\4\y\z\s\r\6\4\o\9\b\8\j\e\s\y\7\j\2\7\t\r\p\e\l\a\s\2\h\g\z\t\d\u\d\1\3\d\g\b\m\i\b\r\x\z\o\t\a\n\5\a\t\v\m\p\q\x\k\v\d\j\q\w\r\t\x\9\h\n\i\9\f\f\z\o\c\0\8\s\b\c\z\o\p\j\m\g\2\o\p\7\c\v\r\y\f\4\b\y\f\1\q\j\8\q\q\5\3\j\t\r\b\9\d\q\e\f\l\x\n\5\6\b\h\o\a\t\l\e\3\o\o\f\l\l\x\w\i\8\a\b\a\x\z\w\p\a\t\z\l\h\m\j\x\l\o\r\x\j\7\y\j\5\2\7\t\o\n\5\n\2\h\i\9\e\s\u\r\g\4\r\y\o\m\4\y\5\x\p\e\u\t\8\n\n\b\i\5\g\l\o\2\x\w\x\s\x\i\x\2\c\x\g\3\n\v\0\s\y\p\z\a\d\j\e\s\u\b\2\f\3\v\z\u\3\z\l\e\k\j\h\y\m\q\a\x\g\8\z\e\c\t\r\q\y\p\6\2\m\1\2\5\a\9\c\o\k\y\4\3\a\i\f\g\r\9\d\4\h\t\g\r\u\a\q\8\g\b\w\1\r\2\1\x\u\h\m\j\w\w\m\b\n\s\2\k\y\h\a\k\b\o\9\n\t\w\e\w\d\r\p\r\o\u\0\r\h\3\w\7\z\t\1\r\8\j\0\2\u\k\v\j\c\r ]] 00:07:48.328 22:39:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:48.328 22:39:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:48.329 [2024-12-07 22:39:03.064333] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
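The backslash-heavy walls of text in this section are not corruption: they are bash xtrace output of the dd/posix.sh@93 content check, a [[ ... == ... ]] comparison in which the trace escapes every character of the right-hand side to show it is matched as a literal string rather than as a glob pattern. A sketch of an equivalent check, with cmp shown as a binary-safe alternative (an assumption, not what the suite itself uses):

    in=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    out=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    # Quoting both sides of == forces a literal match, like the escaped
    # form in the trace. $(<file) is safe here because the generated data
    # is plain alphanumerics; it would drop NUL bytes in arbitrary binaries.
    [[ "$(< "$in")" == "$(< "$out")" ]] && echo "round-trip intact"

    cmp --silent "$in" "$out" && echo "byte-identical"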
00:07:48.329 [2024-12-07 22:39:03.064427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72359 ] 00:07:48.587 [2024-12-07 22:39:03.201013] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.587 [2024-12-07 22:39:03.233825] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.587 [2024-12-07 22:39:03.262472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.587  [2024-12-07T22:39:03.613Z] Copying: 512/512 [B] (average 500 kBps) 00:07:48.847 00:07:48.847 22:39:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2k7u36b69e0cl6yughvvl9ooxy2cz3bms5hz2i3v3byf7spjnxo78th6v5nc8o3ojciuu2mjqeg4kovgnwjyi0y5jd76kxcmdqsqkocktqbjcezlnflwc7xu2nl6dbbanqu15vamgpaxbfxracnj4pf2644zp5kzzr4kakxn4yzsr64o9b8jesy7j27trpelas2hgztdud13dgbmibrxzotan5atvmpqxkvdjqwrtx9hni9ffzoc08sbczopjmg2op7cvryf4byf1qj8qq53jtrb9dqeflxn56bhoatle3oofllxwi8abaxzwpatzlhmjxlorxj7yj527ton5n2hi9esurg4ryom4y5xpeut8nnbi5glo2xwxsxix2cxg3nv0sypzadjesub2f3vzu3zlekjhymqaxg8zectrqyp62m125a9coky43aifgr9d4htgruaq8gbw1r21xuhmjwwmbns2kyhakbo9ntwewdrprou0rh3w7zt1r8j02ukvjcr == \2\k\7\u\3\6\b\6\9\e\0\c\l\6\y\u\g\h\v\v\l\9\o\o\x\y\2\c\z\3\b\m\s\5\h\z\2\i\3\v\3\b\y\f\7\s\p\j\n\x\o\7\8\t\h\6\v\5\n\c\8\o\3\o\j\c\i\u\u\2\m\j\q\e\g\4\k\o\v\g\n\w\j\y\i\0\y\5\j\d\7\6\k\x\c\m\d\q\s\q\k\o\c\k\t\q\b\j\c\e\z\l\n\f\l\w\c\7\x\u\2\n\l\6\d\b\b\a\n\q\u\1\5\v\a\m\g\p\a\x\b\f\x\r\a\c\n\j\4\p\f\2\6\4\4\z\p\5\k\z\z\r\4\k\a\k\x\n\4\y\z\s\r\6\4\o\9\b\8\j\e\s\y\7\j\2\7\t\r\p\e\l\a\s\2\h\g\z\t\d\u\d\1\3\d\g\b\m\i\b\r\x\z\o\t\a\n\5\a\t\v\m\p\q\x\k\v\d\j\q\w\r\t\x\9\h\n\i\9\f\f\z\o\c\0\8\s\b\c\z\o\p\j\m\g\2\o\p\7\c\v\r\y\f\4\b\y\f\1\q\j\8\q\q\5\3\j\t\r\b\9\d\q\e\f\l\x\n\5\6\b\h\o\a\t\l\e\3\o\o\f\l\l\x\w\i\8\a\b\a\x\z\w\p\a\t\z\l\h\m\j\x\l\o\r\x\j\7\y\j\5\2\7\t\o\n\5\n\2\h\i\9\e\s\u\r\g\4\r\y\o\m\4\y\5\x\p\e\u\t\8\n\n\b\i\5\g\l\o\2\x\w\x\s\x\i\x\2\c\x\g\3\n\v\0\s\y\p\z\a\d\j\e\s\u\b\2\f\3\v\z\u\3\z\l\e\k\j\h\y\m\q\a\x\g\8\z\e\c\t\r\q\y\p\6\2\m\1\2\5\a\9\c\o\k\y\4\3\a\i\f\g\r\9\d\4\h\t\g\r\u\a\q\8\g\b\w\1\r\2\1\x\u\h\m\j\w\w\m\b\n\s\2\k\y\h\a\k\b\o\9\n\t\w\e\w\d\r\p\r\o\u\0\r\h\3\w\7\z\t\1\r\8\j\0\2\u\k\v\j\c\r ]] 00:07:48.847 22:39:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:48.847 22:39:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:48.847 22:39:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:48.847 22:39:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:48.847 22:39:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:48.847 22:39:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:48.847 [2024-12-07 22:39:03.461435] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:48.847 [2024-12-07 22:39:03.461711] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72374 ] 00:07:48.847 [2024-12-07 22:39:03.596735] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.107 [2024-12-07 22:39:03.630483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.107 [2024-12-07 22:39:03.656925] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.107  [2024-12-07T22:39:03.873Z] Copying: 512/512 [B] (average 500 kBps) 00:07:49.107 00:07:49.107 22:39:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 43as9u8bt4ge54ro0bskwsaplpn38458ben8zo2xnxe5bevf8voibz5h4jd82u4id1v7stzd66nb5ykx596x82w2263yusw48zue2t7b9e0x9b7pz3l6xrinxoaalcn26a5ompwtmbjhjmitbeqdfcwxj5hs7wiweqiwhrqlz18lxrzau428rq3tngudgxg7xs0lya0iggg46nup643s4kpplkdkwowa9w0uu22spgvk3fbg8qn1pz9xsisn50ouh32uz5pgg1tpznsol3ssypt46pw8hwjq5tfpndfm6rzb5eiy6jlc2fwlxtsjjk8qq5udolu53y5457evafmgh201fwj62es8zy3ky00abg711q2gg671ugc93j2977do7hpcg4wfii6a33ylqmumw4zpo0mkgm7d492bd9240yw08xcowkh4rx9mrw88gbd9coaayttkv2k5g28gmu2mren4kgi8rcopljpno6k1nm52apmqhcpkne11he8d007b == \4\3\a\s\9\u\8\b\t\4\g\e\5\4\r\o\0\b\s\k\w\s\a\p\l\p\n\3\8\4\5\8\b\e\n\8\z\o\2\x\n\x\e\5\b\e\v\f\8\v\o\i\b\z\5\h\4\j\d\8\2\u\4\i\d\1\v\7\s\t\z\d\6\6\n\b\5\y\k\x\5\9\6\x\8\2\w\2\2\6\3\y\u\s\w\4\8\z\u\e\2\t\7\b\9\e\0\x\9\b\7\p\z\3\l\6\x\r\i\n\x\o\a\a\l\c\n\2\6\a\5\o\m\p\w\t\m\b\j\h\j\m\i\t\b\e\q\d\f\c\w\x\j\5\h\s\7\w\i\w\e\q\i\w\h\r\q\l\z\1\8\l\x\r\z\a\u\4\2\8\r\q\3\t\n\g\u\d\g\x\g\7\x\s\0\l\y\a\0\i\g\g\g\4\6\n\u\p\6\4\3\s\4\k\p\p\l\k\d\k\w\o\w\a\9\w\0\u\u\2\2\s\p\g\v\k\3\f\b\g\8\q\n\1\p\z\9\x\s\i\s\n\5\0\o\u\h\3\2\u\z\5\p\g\g\1\t\p\z\n\s\o\l\3\s\s\y\p\t\4\6\p\w\8\h\w\j\q\5\t\f\p\n\d\f\m\6\r\z\b\5\e\i\y\6\j\l\c\2\f\w\l\x\t\s\j\j\k\8\q\q\5\u\d\o\l\u\5\3\y\5\4\5\7\e\v\a\f\m\g\h\2\0\1\f\w\j\6\2\e\s\8\z\y\3\k\y\0\0\a\b\g\7\1\1\q\2\g\g\6\7\1\u\g\c\9\3\j\2\9\7\7\d\o\7\h\p\c\g\4\w\f\i\i\6\a\3\3\y\l\q\m\u\m\w\4\z\p\o\0\m\k\g\m\7\d\4\9\2\b\d\9\2\4\0\y\w\0\8\x\c\o\w\k\h\4\r\x\9\m\r\w\8\8\g\b\d\9\c\o\a\a\y\t\t\k\v\2\k\5\g\2\8\g\m\u\2\m\r\e\n\4\k\g\i\8\r\c\o\p\l\j\p\n\o\6\k\1\n\m\5\2\a\p\m\q\h\c\p\k\n\e\1\1\h\e\8\d\0\0\7\b ]] 00:07:49.107 22:39:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:49.107 22:39:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:49.107 [2024-12-07 22:39:03.844216] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:49.107 [2024-12-07 22:39:03.844510] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72378 ] 00:07:49.366 [2024-12-07 22:39:03.976097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.366 [2024-12-07 22:39:04.008257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.366 [2024-12-07 22:39:04.034092] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.366  [2024-12-07T22:39:04.391Z] Copying: 512/512 [B] (average 500 kBps) 00:07:49.625 00:07:49.625 22:39:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 43as9u8bt4ge54ro0bskwsaplpn38458ben8zo2xnxe5bevf8voibz5h4jd82u4id1v7stzd66nb5ykx596x82w2263yusw48zue2t7b9e0x9b7pz3l6xrinxoaalcn26a5ompwtmbjhjmitbeqdfcwxj5hs7wiweqiwhrqlz18lxrzau428rq3tngudgxg7xs0lya0iggg46nup643s4kpplkdkwowa9w0uu22spgvk3fbg8qn1pz9xsisn50ouh32uz5pgg1tpznsol3ssypt46pw8hwjq5tfpndfm6rzb5eiy6jlc2fwlxtsjjk8qq5udolu53y5457evafmgh201fwj62es8zy3ky00abg711q2gg671ugc93j2977do7hpcg4wfii6a33ylqmumw4zpo0mkgm7d492bd9240yw08xcowkh4rx9mrw88gbd9coaayttkv2k5g28gmu2mren4kgi8rcopljpno6k1nm52apmqhcpkne11he8d007b == \4\3\a\s\9\u\8\b\t\4\g\e\5\4\r\o\0\b\s\k\w\s\a\p\l\p\n\3\8\4\5\8\b\e\n\8\z\o\2\x\n\x\e\5\b\e\v\f\8\v\o\i\b\z\5\h\4\j\d\8\2\u\4\i\d\1\v\7\s\t\z\d\6\6\n\b\5\y\k\x\5\9\6\x\8\2\w\2\2\6\3\y\u\s\w\4\8\z\u\e\2\t\7\b\9\e\0\x\9\b\7\p\z\3\l\6\x\r\i\n\x\o\a\a\l\c\n\2\6\a\5\o\m\p\w\t\m\b\j\h\j\m\i\t\b\e\q\d\f\c\w\x\j\5\h\s\7\w\i\w\e\q\i\w\h\r\q\l\z\1\8\l\x\r\z\a\u\4\2\8\r\q\3\t\n\g\u\d\g\x\g\7\x\s\0\l\y\a\0\i\g\g\g\4\6\n\u\p\6\4\3\s\4\k\p\p\l\k\d\k\w\o\w\a\9\w\0\u\u\2\2\s\p\g\v\k\3\f\b\g\8\q\n\1\p\z\9\x\s\i\s\n\5\0\o\u\h\3\2\u\z\5\p\g\g\1\t\p\z\n\s\o\l\3\s\s\y\p\t\4\6\p\w\8\h\w\j\q\5\t\f\p\n\d\f\m\6\r\z\b\5\e\i\y\6\j\l\c\2\f\w\l\x\t\s\j\j\k\8\q\q\5\u\d\o\l\u\5\3\y\5\4\5\7\e\v\a\f\m\g\h\2\0\1\f\w\j\6\2\e\s\8\z\y\3\k\y\0\0\a\b\g\7\1\1\q\2\g\g\6\7\1\u\g\c\9\3\j\2\9\7\7\d\o\7\h\p\c\g\4\w\f\i\i\6\a\3\3\y\l\q\m\u\m\w\4\z\p\o\0\m\k\g\m\7\d\4\9\2\b\d\9\2\4\0\y\w\0\8\x\c\o\w\k\h\4\r\x\9\m\r\w\8\8\g\b\d\9\c\o\a\a\y\t\t\k\v\2\k\5\g\2\8\g\m\u\2\m\r\e\n\4\k\g\i\8\r\c\o\p\l\j\p\n\o\6\k\1\n\m\5\2\a\p\m\q\h\c\p\k\n\e\1\1\h\e\8\d\0\0\7\b ]] 00:07:49.625 22:39:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:49.625 22:39:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:49.625 [2024-12-07 22:39:04.226182] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:49.625 [2024-12-07 22:39:04.226417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72387 ] 00:07:49.625 [2024-12-07 22:39:04.361726] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.883 [2024-12-07 22:39:04.399686] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.883 [2024-12-07 22:39:04.426148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.883  [2024-12-07T22:39:04.649Z] Copying: 512/512 [B] (average 250 kBps) 00:07:49.883 00:07:49.883 22:39:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 43as9u8bt4ge54ro0bskwsaplpn38458ben8zo2xnxe5bevf8voibz5h4jd82u4id1v7stzd66nb5ykx596x82w2263yusw48zue2t7b9e0x9b7pz3l6xrinxoaalcn26a5ompwtmbjhjmitbeqdfcwxj5hs7wiweqiwhrqlz18lxrzau428rq3tngudgxg7xs0lya0iggg46nup643s4kpplkdkwowa9w0uu22spgvk3fbg8qn1pz9xsisn50ouh32uz5pgg1tpznsol3ssypt46pw8hwjq5tfpndfm6rzb5eiy6jlc2fwlxtsjjk8qq5udolu53y5457evafmgh201fwj62es8zy3ky00abg711q2gg671ugc93j2977do7hpcg4wfii6a33ylqmumw4zpo0mkgm7d492bd9240yw08xcowkh4rx9mrw88gbd9coaayttkv2k5g28gmu2mren4kgi8rcopljpno6k1nm52apmqhcpkne11he8d007b == \4\3\a\s\9\u\8\b\t\4\g\e\5\4\r\o\0\b\s\k\w\s\a\p\l\p\n\3\8\4\5\8\b\e\n\8\z\o\2\x\n\x\e\5\b\e\v\f\8\v\o\i\b\z\5\h\4\j\d\8\2\u\4\i\d\1\v\7\s\t\z\d\6\6\n\b\5\y\k\x\5\9\6\x\8\2\w\2\2\6\3\y\u\s\w\4\8\z\u\e\2\t\7\b\9\e\0\x\9\b\7\p\z\3\l\6\x\r\i\n\x\o\a\a\l\c\n\2\6\a\5\o\m\p\w\t\m\b\j\h\j\m\i\t\b\e\q\d\f\c\w\x\j\5\h\s\7\w\i\w\e\q\i\w\h\r\q\l\z\1\8\l\x\r\z\a\u\4\2\8\r\q\3\t\n\g\u\d\g\x\g\7\x\s\0\l\y\a\0\i\g\g\g\4\6\n\u\p\6\4\3\s\4\k\p\p\l\k\d\k\w\o\w\a\9\w\0\u\u\2\2\s\p\g\v\k\3\f\b\g\8\q\n\1\p\z\9\x\s\i\s\n\5\0\o\u\h\3\2\u\z\5\p\g\g\1\t\p\z\n\s\o\l\3\s\s\y\p\t\4\6\p\w\8\h\w\j\q\5\t\f\p\n\d\f\m\6\r\z\b\5\e\i\y\6\j\l\c\2\f\w\l\x\t\s\j\j\k\8\q\q\5\u\d\o\l\u\5\3\y\5\4\5\7\e\v\a\f\m\g\h\2\0\1\f\w\j\6\2\e\s\8\z\y\3\k\y\0\0\a\b\g\7\1\1\q\2\g\g\6\7\1\u\g\c\9\3\j\2\9\7\7\d\o\7\h\p\c\g\4\w\f\i\i\6\a\3\3\y\l\q\m\u\m\w\4\z\p\o\0\m\k\g\m\7\d\4\9\2\b\d\9\2\4\0\y\w\0\8\x\c\o\w\k\h\4\r\x\9\m\r\w\8\8\g\b\d\9\c\o\a\a\y\t\t\k\v\2\k\5\g\2\8\g\m\u\2\m\r\e\n\4\k\g\i\8\r\c\o\p\l\j\p\n\o\6\k\1\n\m\5\2\a\p\m\q\h\c\p\k\n\e\1\1\h\e\8\d\0\0\7\b ]] 00:07:49.883 22:39:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:49.883 22:39:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:49.883 [2024-12-07 22:39:04.614084] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:49.883 [2024-12-07 22:39:04.614175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72397 ] 00:07:50.140 [2024-12-07 22:39:04.749612] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.140 [2024-12-07 22:39:04.780027] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.140 [2024-12-07 22:39:04.805672] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.140  [2024-12-07T22:39:05.164Z] Copying: 512/512 [B] (average 500 kBps) 00:07:50.398 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 43as9u8bt4ge54ro0bskwsaplpn38458ben8zo2xnxe5bevf8voibz5h4jd82u4id1v7stzd66nb5ykx596x82w2263yusw48zue2t7b9e0x9b7pz3l6xrinxoaalcn26a5ompwtmbjhjmitbeqdfcwxj5hs7wiweqiwhrqlz18lxrzau428rq3tngudgxg7xs0lya0iggg46nup643s4kpplkdkwowa9w0uu22spgvk3fbg8qn1pz9xsisn50ouh32uz5pgg1tpznsol3ssypt46pw8hwjq5tfpndfm6rzb5eiy6jlc2fwlxtsjjk8qq5udolu53y5457evafmgh201fwj62es8zy3ky00abg711q2gg671ugc93j2977do7hpcg4wfii6a33ylqmumw4zpo0mkgm7d492bd9240yw08xcowkh4rx9mrw88gbd9coaayttkv2k5g28gmu2mren4kgi8rcopljpno6k1nm52apmqhcpkne11he8d007b == \4\3\a\s\9\u\8\b\t\4\g\e\5\4\r\o\0\b\s\k\w\s\a\p\l\p\n\3\8\4\5\8\b\e\n\8\z\o\2\x\n\x\e\5\b\e\v\f\8\v\o\i\b\z\5\h\4\j\d\8\2\u\4\i\d\1\v\7\s\t\z\d\6\6\n\b\5\y\k\x\5\9\6\x\8\2\w\2\2\6\3\y\u\s\w\4\8\z\u\e\2\t\7\b\9\e\0\x\9\b\7\p\z\3\l\6\x\r\i\n\x\o\a\a\l\c\n\2\6\a\5\o\m\p\w\t\m\b\j\h\j\m\i\t\b\e\q\d\f\c\w\x\j\5\h\s\7\w\i\w\e\q\i\w\h\r\q\l\z\1\8\l\x\r\z\a\u\4\2\8\r\q\3\t\n\g\u\d\g\x\g\7\x\s\0\l\y\a\0\i\g\g\g\4\6\n\u\p\6\4\3\s\4\k\p\p\l\k\d\k\w\o\w\a\9\w\0\u\u\2\2\s\p\g\v\k\3\f\b\g\8\q\n\1\p\z\9\x\s\i\s\n\5\0\o\u\h\3\2\u\z\5\p\g\g\1\t\p\z\n\s\o\l\3\s\s\y\p\t\4\6\p\w\8\h\w\j\q\5\t\f\p\n\d\f\m\6\r\z\b\5\e\i\y\6\j\l\c\2\f\w\l\x\t\s\j\j\k\8\q\q\5\u\d\o\l\u\5\3\y\5\4\5\7\e\v\a\f\m\g\h\2\0\1\f\w\j\6\2\e\s\8\z\y\3\k\y\0\0\a\b\g\7\1\1\q\2\g\g\6\7\1\u\g\c\9\3\j\2\9\7\7\d\o\7\h\p\c\g\4\w\f\i\i\6\a\3\3\y\l\q\m\u\m\w\4\z\p\o\0\m\k\g\m\7\d\4\9\2\b\d\9\2\4\0\y\w\0\8\x\c\o\w\k\h\4\r\x\9\m\r\w\8\8\g\b\d\9\c\o\a\a\y\t\t\k\v\2\k\5\g\2\8\g\m\u\2\m\r\e\n\4\k\g\i\8\r\c\o\p\l\j\p\n\o\6\k\1\n\m\5\2\a\p\m\q\h\c\p\k\n\e\1\1\h\e\8\d\0\0\7\b ]] 00:07:50.398 00:07:50.398 real 0m3.093s 00:07:50.398 user 0m1.520s 00:07:50.398 sys 0m1.327s 00:07:50.398 ************************************ 00:07:50.398 END TEST dd_flags_misc 00:07:50.398 ************************************ 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:50.398 * Second test run, disabling liburing, forcing AIO 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:50.398 ************************************ 00:07:50.398 START TEST dd_flag_append_forced_aio 00:07:50.398 ************************************ 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=p9a6zdep45nlswv98gpch5efkih3gemm 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=0qnjgz28emngkv6edz8iaj9ww3dcjrqm 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s p9a6zdep45nlswv98gpch5efkih3gemm 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 0qnjgz28emngkv6edz8iaj9ww3dcjrqm 00:07:50.398 22:39:04 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:50.398 [2024-12-07 22:39:05.050988] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
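The append check that follows (dd/posix.sh@27) compares the whole output file against its previous contents with the new input concatenated on the end, which is why the next [[ ]] line shows the 0qnjgz28... string immediately followed by p9a6zdep... . A minimal sketch, assuming both dump files already hold the 32-byte strings written by the printf calls above:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    in=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    out=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    dump0=$(< "$in")    # the 32 bytes to append
    dump1=$(< "$out")   # the pre-existing output contents

    "$DD" --aio --if="$in" --of="$out" --oflag=append   # O_APPEND, AIO forced

    [[ "$(< "$out")" == "${dump1}${dump0}" ]] \
      && echo "append kept the old bytes and added the new ones"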
00:07:50.398 [2024-12-07 22:39:05.051102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72420 ] 00:07:50.656 [2024-12-07 22:39:05.188544] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.656 [2024-12-07 22:39:05.223823] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.656 [2024-12-07 22:39:05.249835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.656  [2024-12-07T22:39:05.422Z] Copying: 32/32 [B] (average 31 kBps) 00:07:50.656 00:07:50.656 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 0qnjgz28emngkv6edz8iaj9ww3dcjrqmp9a6zdep45nlswv98gpch5efkih3gemm == \0\q\n\j\g\z\2\8\e\m\n\g\k\v\6\e\d\z\8\i\a\j\9\w\w\3\d\c\j\r\q\m\p\9\a\6\z\d\e\p\4\5\n\l\s\w\v\9\8\g\p\c\h\5\e\f\k\i\h\3\g\e\m\m ]] 00:07:50.656 00:07:50.656 real 0m0.411s 00:07:50.656 user 0m0.202s 00:07:50.656 sys 0m0.090s 00:07:50.656 ************************************ 00:07:50.656 END TEST dd_flag_append_forced_aio 00:07:50.656 ************************************ 00:07:50.656 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.656 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:50.914 22:39:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:50.914 22:39:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.914 22:39:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.914 22:39:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:50.914 ************************************ 00:07:50.914 START TEST dd_flag_directory_forced_aio 00:07:50.914 ************************************ 00:07:50.914 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:07:50.914 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.914 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:50.914 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.914 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.914 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.914 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.914 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.914 22:39:05 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.914 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:50.914 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.914 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.914 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.914 [2024-12-07 22:39:05.507677] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:50.914 [2024-12-07 22:39:05.507775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72452 ] 00:07:50.914 [2024-12-07 22:39:05.642481] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.171 [2024-12-07 22:39:05.680493] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.171 [2024-12-07 22:39:05.706720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.171 [2024-12-07 22:39:05.721330] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:51.171 [2024-12-07 22:39:05.721378] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:51.171 [2024-12-07 22:39:05.721406] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.171 [2024-12-07 22:39:05.776161] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.171 22:39:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:51.171 [2024-12-07 22:39:05.896003] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:51.171 [2024-12-07 22:39:05.896096] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72456 ] 00:07:51.429 [2024-12-07 22:39:06.030077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.430 [2024-12-07 22:39:06.060507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.430 [2024-12-07 22:39:06.086193] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.430 [2024-12-07 22:39:06.100266] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:51.430 [2024-12-07 22:39:06.100328] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:51.430 [2024-12-07 22:39:06.100341] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.430 [2024-12-07 22:39:06.154150] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:51.688 22:39:06 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:51.688 00:07:51.688 real 0m0.766s 00:07:51.688 user 0m0.364s 00:07:51.688 sys 0m0.195s 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.688 ************************************ 00:07:51.688 END TEST dd_flag_directory_forced_aio 00:07:51.688 ************************************ 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:51.688 ************************************ 00:07:51.688 START TEST dd_flag_nofollow_forced_aio 00:07:51.688 ************************************ 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.688 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:51.689 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.689 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.689 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.689 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.689 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.689 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.689 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.689 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.689 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.689 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.689 [2024-12-07 22:39:06.332854] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:51.689 [2024-12-07 22:39:06.332974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72484 ] 00:07:51.947 [2024-12-07 22:39:06.470229] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.947 [2024-12-07 22:39:06.501066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.947 [2024-12-07 22:39:06.527097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.947 [2024-12-07 22:39:06.541296] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:51.947 [2024-12-07 22:39:06.541357] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:51.947 [2024-12-07 22:39:06.541370] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.947 [2024-12-07 22:39:06.600841] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.947 22:39:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:52.205 [2024-12-07 22:39:06.717397] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:52.205 [2024-12-07 22:39:06.717491] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72494 ] 00:07:52.205 [2024-12-07 22:39:06.854697] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.205 [2024-12-07 22:39:06.886102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.205 [2024-12-07 22:39:06.912148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.205 [2024-12-07 22:39:06.926432] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:52.205 [2024-12-07 22:39:06.926493] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:52.205 [2024-12-07 22:39:06.926506] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.463 [2024-12-07 22:39:06.982199] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:52.463 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:07:52.463 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:52.463 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:07:52.463 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:07:52.463 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:07:52.463 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:52.463 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:07:52.463 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:52.463 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:52.463 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.463 [2024-12-07 22:39:07.099490] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:52.463 [2024-12-07 22:39:07.099580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72496 ] 00:07:52.721 [2024-12-07 22:39:07.231419] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.721 [2024-12-07 22:39:07.266656] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.721 [2024-12-07 22:39:07.297501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.721  [2024-12-07T22:39:07.487Z] Copying: 512/512 [B] (average 500 kBps) 00:07:52.721 00:07:52.721 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ dbenlx9d4bg0makqlsjseg8azylzcc6d74kk6px6k2qndsjt230bxdha6dkjl1mdrz27qnu0d4aysoebm4y0pkhdakwao6k7rdt306114884ufwhhn0ciylapvudyg1dda418p6fc729gyu7ybglgbpzmadtwof48q3k8fbd2z0wh8w77wo546n0m2gd0ln13wezwqlc1s0ddy9jqudik3ogal0qvupdjwdrguwx1g6atrp961v9emklz885gfy6et5rsubuigby0rhb9ldlecueinbp4az2v8qj7g1m9q00rngug50eak4tyavktgx78etpqspiwddjeaj7abfrybwupq1708vc5hcgz8hydx41cvjne646za2119k53gpm4fea6ap2p5u20z36ap1l3dvu31x42kdjb19vc6rvtwhyparb0qpzgiwmuobofhdtg8z8ds9veschkff2nbmbs901j0kw6hhd0qznnb4ea6rn2j4jcndu02jc5m6il9sr == \d\b\e\n\l\x\9\d\4\b\g\0\m\a\k\q\l\s\j\s\e\g\8\a\z\y\l\z\c\c\6\d\7\4\k\k\6\p\x\6\k\2\q\n\d\s\j\t\2\3\0\b\x\d\h\a\6\d\k\j\l\1\m\d\r\z\2\7\q\n\u\0\d\4\a\y\s\o\e\b\m\4\y\0\p\k\h\d\a\k\w\a\o\6\k\7\r\d\t\3\0\6\1\1\4\8\8\4\u\f\w\h\h\n\0\c\i\y\l\a\p\v\u\d\y\g\1\d\d\a\4\1\8\p\6\f\c\7\2\9\g\y\u\7\y\b\g\l\g\b\p\z\m\a\d\t\w\o\f\4\8\q\3\k\8\f\b\d\2\z\0\w\h\8\w\7\7\w\o\5\4\6\n\0\m\2\g\d\0\l\n\1\3\w\e\z\w\q\l\c\1\s\0\d\d\y\9\j\q\u\d\i\k\3\o\g\a\l\0\q\v\u\p\d\j\w\d\r\g\u\w\x\1\g\6\a\t\r\p\9\6\1\v\9\e\m\k\l\z\8\8\5\g\f\y\6\e\t\5\r\s\u\b\u\i\g\b\y\0\r\h\b\9\l\d\l\e\c\u\e\i\n\b\p\4\a\z\2\v\8\q\j\7\g\1\m\9\q\0\0\r\n\g\u\g\5\0\e\a\k\4\t\y\a\v\k\t\g\x\7\8\e\t\p\q\s\p\i\w\d\d\j\e\a\j\7\a\b\f\r\y\b\w\u\p\q\1\7\0\8\v\c\5\h\c\g\z\8\h\y\d\x\4\1\c\v\j\n\e\6\4\6\z\a\2\1\1\9\k\5\3\g\p\m\4\f\e\a\6\a\p\2\p\5\u\2\0\z\3\6\a\p\1\l\3\d\v\u\3\1\x\4\2\k\d\j\b\1\9\v\c\6\r\v\t\w\h\y\p\a\r\b\0\q\p\z\g\i\w\m\u\o\b\o\f\h\d\t\g\8\z\8\d\s\9\v\e\s\c\h\k\f\f\2\n\b\m\b\s\9\0\1\j\0\k\w\6\h\h\d\0\q\z\n\n\b\4\e\a\6\r\n\2\j\4\j\c\n\d\u\0\2\j\c\5\m\6\i\l\9\s\r ]] 00:07:52.721 00:07:52.721 real 0m1.199s 00:07:52.721 user 0m0.578s 00:07:52.721 sys 0m0.294s 00:07:52.721 ************************************ 00:07:52.721 END TEST dd_flag_nofollow_forced_aio 00:07:52.721 ************************************ 00:07:52.721 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.721 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:52.978 22:39:07 spdk_dd.spdk_dd_posix -- 
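The two "Too many levels of symbolic links" errors above are the expected ELOOP from opening a symlink with O_NOFOLLOW; the raw exit status es=216 seen in the NOT() trace is consistent with spdk_dd returning the negated errno (256 - 40 for ELOOP, just as the directory test's es=236 matches 256 - 20 for ENOTDIR) before the helper normalizes it down to 1. A minimal sketch of the negative and positive halves, reusing the paths from this log:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    f0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    f1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

    ln -fs "$f0" "$f0.link"   # fresh symlinks, as in dd/posix.sh@39-40
    ln -fs "$f1" "$f1.link"

    # Negative half: O_NOFOLLOW must refuse to open a symlink (ELOOP).
    if ! "$DD" --aio --if="$f0.link" --iflag=nofollow --of="$f1"; then
      echo "nofollow rejected the symlink, as expected"
    fi

    # Positive half: without the flag the link is dereferenced and the
    # 512-byte copy goes through.
    "$DD" --aio --if="$f0.link" --of="$f1"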
dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:07:52.978 22:39:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:52.978 22:39:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.978 22:39:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:52.978 ************************************ 00:07:52.978 START TEST dd_flag_noatime_forced_aio 00:07:52.978 ************************************ 00:07:52.978 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:07:52.978 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:07:52.978 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:07:52.978 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:07:52.978 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:52.978 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:52.978 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:52.978 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1733611147 00:07:52.978 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.978 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1733611147 00:07:52.978 22:39:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:07:53.912 22:39:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:53.912 [2024-12-07 22:39:08.588381] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:53.912 [2024-12-07 22:39:08.588487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72542 ] 00:07:54.170 [2024-12-07 22:39:08.728858] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.170 [2024-12-07 22:39:08.769273] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.170 [2024-12-07 22:39:08.801111] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.170  [2024-12-07T22:39:09.194Z] Copying: 512/512 [B] (average 500 kBps) 00:07:54.428 00:07:54.428 22:39:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:54.428 22:39:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1733611147 )) 00:07:54.428 22:39:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.428 22:39:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1733611147 )) 00:07:54.428 22:39:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.428 [2024-12-07 22:39:09.047047] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:54.428 [2024-12-07 22:39:09.047153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72548 ] 00:07:54.428 [2024-12-07 22:39:09.188091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.687 [2024-12-07 22:39:09.230726] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.687 [2024-12-07 22:39:09.263968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.687  [2024-12-07T22:39:09.712Z] Copying: 512/512 [B] (average 500 kBps) 00:07:54.946 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1733611149 )) 00:07:54.946 00:07:54.946 real 0m1.946s 00:07:54.946 user 0m0.471s 00:07:54.946 sys 0m0.212s 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:54.946 ************************************ 00:07:54.946 END TEST dd_flag_noatime_forced_aio 00:07:54.946 ************************************ 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:54.946 ************************************ 00:07:54.946 START TEST dd_flags_misc_forced_aio 00:07:54.946 ************************************ 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:54.946 22:39:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:54.946 [2024-12-07 22:39:09.576891] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:54.946 [2024-12-07 22:39:09.577001] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72580 ] 00:07:55.205 [2024-12-07 22:39:09.717532] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.205 [2024-12-07 22:39:09.758412] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.205 [2024-12-07 22:39:09.790179] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.205  [2024-12-07T22:39:09.971Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.205 00:07:55.205 22:39:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ddzzdgquoa5e008u0w2gy3ylchp7mguwrr77n380sl9bwjgexpwvibdmvuk39yzi06d765gwnd91hbgc6ddqaa9jycn74blvnmo5jtk4hcno91xmf8az2jmk4ht6nzqwcnti2d0s7smitti2vrmlxwxv8vm94vy77o3dn6advkeqkvmzsdkatkduhdsp5xk6sok005s07rxn8be11zmmtz91d9yzf191m4izochpb6znvdh865rrekkqvfekua7cqkgtoyll3s5r23zppwfzchbqqh7kzploupkd2fowoojikbs948edfptvokmjgnilhwrnq92n3sv4slq6d6kp9scrw13bvg1vt66w6jt9ka2uac0oexh5eq2q2tpq2w33nf34eqlddu9qiu0qkn15djk69luy6jir4h23cpj2jx2ves8o3yyyifmvmtruou6v7rqrnxv0sb981ugqku30b03mwp3c7zu042cuf1m0mod9grezj1wsx0w3swlw1u5j == 
\d\d\z\z\d\g\q\u\o\a\5\e\0\0\8\u\0\w\2\g\y\3\y\l\c\h\p\7\m\g\u\w\r\r\7\7\n\3\8\0\s\l\9\b\w\j\g\e\x\p\w\v\i\b\d\m\v\u\k\3\9\y\z\i\0\6\d\7\6\5\g\w\n\d\9\1\h\b\g\c\6\d\d\q\a\a\9\j\y\c\n\7\4\b\l\v\n\m\o\5\j\t\k\4\h\c\n\o\9\1\x\m\f\8\a\z\2\j\m\k\4\h\t\6\n\z\q\w\c\n\t\i\2\d\0\s\7\s\m\i\t\t\i\2\v\r\m\l\x\w\x\v\8\v\m\9\4\v\y\7\7\o\3\d\n\6\a\d\v\k\e\q\k\v\m\z\s\d\k\a\t\k\d\u\h\d\s\p\5\x\k\6\s\o\k\0\0\5\s\0\7\r\x\n\8\b\e\1\1\z\m\m\t\z\9\1\d\9\y\z\f\1\9\1\m\4\i\z\o\c\h\p\b\6\z\n\v\d\h\8\6\5\r\r\e\k\k\q\v\f\e\k\u\a\7\c\q\k\g\t\o\y\l\l\3\s\5\r\2\3\z\p\p\w\f\z\c\h\b\q\q\h\7\k\z\p\l\o\u\p\k\d\2\f\o\w\o\o\j\i\k\b\s\9\4\8\e\d\f\p\t\v\o\k\m\j\g\n\i\l\h\w\r\n\q\9\2\n\3\s\v\4\s\l\q\6\d\6\k\p\9\s\c\r\w\1\3\b\v\g\1\v\t\6\6\w\6\j\t\9\k\a\2\u\a\c\0\o\e\x\h\5\e\q\2\q\2\t\p\q\2\w\3\3\n\f\3\4\e\q\l\d\d\u\9\q\i\u\0\q\k\n\1\5\d\j\k\6\9\l\u\y\6\j\i\r\4\h\2\3\c\p\j\2\j\x\2\v\e\s\8\o\3\y\y\y\i\f\m\v\m\t\r\u\o\u\6\v\7\r\q\r\n\x\v\0\s\b\9\8\1\u\g\q\k\u\3\0\b\0\3\m\w\p\3\c\7\z\u\0\4\2\c\u\f\1\m\0\m\o\d\9\g\r\e\z\j\1\w\s\x\0\w\3\s\w\l\w\1\u\5\j ]] 00:07:55.205 22:39:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.205 22:39:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:55.463 [2024-12-07 22:39:10.014331] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:55.464 [2024-12-07 22:39:10.014430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72582 ] 00:07:55.464 [2024-12-07 22:39:10.153221] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.464 [2024-12-07 22:39:10.194774] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.723 [2024-12-07 22:39:10.229650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.723  [2024-12-07T22:39:10.489Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.723 00:07:55.723 22:39:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ddzzdgquoa5e008u0w2gy3ylchp7mguwrr77n380sl9bwjgexpwvibdmvuk39yzi06d765gwnd91hbgc6ddqaa9jycn74blvnmo5jtk4hcno91xmf8az2jmk4ht6nzqwcnti2d0s7smitti2vrmlxwxv8vm94vy77o3dn6advkeqkvmzsdkatkduhdsp5xk6sok005s07rxn8be11zmmtz91d9yzf191m4izochpb6znvdh865rrekkqvfekua7cqkgtoyll3s5r23zppwfzchbqqh7kzploupkd2fowoojikbs948edfptvokmjgnilhwrnq92n3sv4slq6d6kp9scrw13bvg1vt66w6jt9ka2uac0oexh5eq2q2tpq2w33nf34eqlddu9qiu0qkn15djk69luy6jir4h23cpj2jx2ves8o3yyyifmvmtruou6v7rqrnxv0sb981ugqku30b03mwp3c7zu042cuf1m0mod9grezj1wsx0w3swlw1u5j == 
\d\d\z\z\d\g\q\u\o\a\5\e\0\0\8\u\0\w\2\g\y\3\y\l\c\h\p\7\m\g\u\w\r\r\7\7\n\3\8\0\s\l\9\b\w\j\g\e\x\p\w\v\i\b\d\m\v\u\k\3\9\y\z\i\0\6\d\7\6\5\g\w\n\d\9\1\h\b\g\c\6\d\d\q\a\a\9\j\y\c\n\7\4\b\l\v\n\m\o\5\j\t\k\4\h\c\n\o\9\1\x\m\f\8\a\z\2\j\m\k\4\h\t\6\n\z\q\w\c\n\t\i\2\d\0\s\7\s\m\i\t\t\i\2\v\r\m\l\x\w\x\v\8\v\m\9\4\v\y\7\7\o\3\d\n\6\a\d\v\k\e\q\k\v\m\z\s\d\k\a\t\k\d\u\h\d\s\p\5\x\k\6\s\o\k\0\0\5\s\0\7\r\x\n\8\b\e\1\1\z\m\m\t\z\9\1\d\9\y\z\f\1\9\1\m\4\i\z\o\c\h\p\b\6\z\n\v\d\h\8\6\5\r\r\e\k\k\q\v\f\e\k\u\a\7\c\q\k\g\t\o\y\l\l\3\s\5\r\2\3\z\p\p\w\f\z\c\h\b\q\q\h\7\k\z\p\l\o\u\p\k\d\2\f\o\w\o\o\j\i\k\b\s\9\4\8\e\d\f\p\t\v\o\k\m\j\g\n\i\l\h\w\r\n\q\9\2\n\3\s\v\4\s\l\q\6\d\6\k\p\9\s\c\r\w\1\3\b\v\g\1\v\t\6\6\w\6\j\t\9\k\a\2\u\a\c\0\o\e\x\h\5\e\q\2\q\2\t\p\q\2\w\3\3\n\f\3\4\e\q\l\d\d\u\9\q\i\u\0\q\k\n\1\5\d\j\k\6\9\l\u\y\6\j\i\r\4\h\2\3\c\p\j\2\j\x\2\v\e\s\8\o\3\y\y\y\i\f\m\v\m\t\r\u\o\u\6\v\7\r\q\r\n\x\v\0\s\b\9\8\1\u\g\q\k\u\3\0\b\0\3\m\w\p\3\c\7\z\u\0\4\2\c\u\f\1\m\0\m\o\d\9\g\r\e\z\j\1\w\s\x\0\w\3\s\w\l\w\1\u\5\j ]] 00:07:55.723 22:39:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.723 22:39:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:55.723 [2024-12-07 22:39:10.475177] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:55.723 [2024-12-07 22:39:10.475291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72598 ] 00:07:55.983 [2024-12-07 22:39:10.609900] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.983 [2024-12-07 22:39:10.640602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.983 [2024-12-07 22:39:10.666581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.983  [2024-12-07T22:39:11.008Z] Copying: 512/512 [B] (average 500 kBps) 00:07:56.242 00:07:56.242 22:39:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ddzzdgquoa5e008u0w2gy3ylchp7mguwrr77n380sl9bwjgexpwvibdmvuk39yzi06d765gwnd91hbgc6ddqaa9jycn74blvnmo5jtk4hcno91xmf8az2jmk4ht6nzqwcnti2d0s7smitti2vrmlxwxv8vm94vy77o3dn6advkeqkvmzsdkatkduhdsp5xk6sok005s07rxn8be11zmmtz91d9yzf191m4izochpb6znvdh865rrekkqvfekua7cqkgtoyll3s5r23zppwfzchbqqh7kzploupkd2fowoojikbs948edfptvokmjgnilhwrnq92n3sv4slq6d6kp9scrw13bvg1vt66w6jt9ka2uac0oexh5eq2q2tpq2w33nf34eqlddu9qiu0qkn15djk69luy6jir4h23cpj2jx2ves8o3yyyifmvmtruou6v7rqrnxv0sb981ugqku30b03mwp3c7zu042cuf1m0mod9grezj1wsx0w3swlw1u5j == 
\d\d\z\z\d\g\q\u\o\a\5\e\0\0\8\u\0\w\2\g\y\3\y\l\c\h\p\7\m\g\u\w\r\r\7\7\n\3\8\0\s\l\9\b\w\j\g\e\x\p\w\v\i\b\d\m\v\u\k\3\9\y\z\i\0\6\d\7\6\5\g\w\n\d\9\1\h\b\g\c\6\d\d\q\a\a\9\j\y\c\n\7\4\b\l\v\n\m\o\5\j\t\k\4\h\c\n\o\9\1\x\m\f\8\a\z\2\j\m\k\4\h\t\6\n\z\q\w\c\n\t\i\2\d\0\s\7\s\m\i\t\t\i\2\v\r\m\l\x\w\x\v\8\v\m\9\4\v\y\7\7\o\3\d\n\6\a\d\v\k\e\q\k\v\m\z\s\d\k\a\t\k\d\u\h\d\s\p\5\x\k\6\s\o\k\0\0\5\s\0\7\r\x\n\8\b\e\1\1\z\m\m\t\z\9\1\d\9\y\z\f\1\9\1\m\4\i\z\o\c\h\p\b\6\z\n\v\d\h\8\6\5\r\r\e\k\k\q\v\f\e\k\u\a\7\c\q\k\g\t\o\y\l\l\3\s\5\r\2\3\z\p\p\w\f\z\c\h\b\q\q\h\7\k\z\p\l\o\u\p\k\d\2\f\o\w\o\o\j\i\k\b\s\9\4\8\e\d\f\p\t\v\o\k\m\j\g\n\i\l\h\w\r\n\q\9\2\n\3\s\v\4\s\l\q\6\d\6\k\p\9\s\c\r\w\1\3\b\v\g\1\v\t\6\6\w\6\j\t\9\k\a\2\u\a\c\0\o\e\x\h\5\e\q\2\q\2\t\p\q\2\w\3\3\n\f\3\4\e\q\l\d\d\u\9\q\i\u\0\q\k\n\1\5\d\j\k\6\9\l\u\y\6\j\i\r\4\h\2\3\c\p\j\2\j\x\2\v\e\s\8\o\3\y\y\y\i\f\m\v\m\t\r\u\o\u\6\v\7\r\q\r\n\x\v\0\s\b\9\8\1\u\g\q\k\u\3\0\b\0\3\m\w\p\3\c\7\z\u\0\4\2\c\u\f\1\m\0\m\o\d\9\g\r\e\z\j\1\w\s\x\0\w\3\s\w\l\w\1\u\5\j ]] 00:07:56.242 22:39:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.242 22:39:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:56.242 [2024-12-07 22:39:10.874820] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:56.243 [2024-12-07 22:39:10.874932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72600 ] 00:07:56.502 [2024-12-07 22:39:11.011451] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.502 [2024-12-07 22:39:11.046462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.502 [2024-12-07 22:39:11.072974] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.502  [2024-12-07T22:39:11.268Z] Copying: 512/512 [B] (average 500 kBps) 00:07:56.502 00:07:56.502 22:39:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ddzzdgquoa5e008u0w2gy3ylchp7mguwrr77n380sl9bwjgexpwvibdmvuk39yzi06d765gwnd91hbgc6ddqaa9jycn74blvnmo5jtk4hcno91xmf8az2jmk4ht6nzqwcnti2d0s7smitti2vrmlxwxv8vm94vy77o3dn6advkeqkvmzsdkatkduhdsp5xk6sok005s07rxn8be11zmmtz91d9yzf191m4izochpb6znvdh865rrekkqvfekua7cqkgtoyll3s5r23zppwfzchbqqh7kzploupkd2fowoojikbs948edfptvokmjgnilhwrnq92n3sv4slq6d6kp9scrw13bvg1vt66w6jt9ka2uac0oexh5eq2q2tpq2w33nf34eqlddu9qiu0qkn15djk69luy6jir4h23cpj2jx2ves8o3yyyifmvmtruou6v7rqrnxv0sb981ugqku30b03mwp3c7zu042cuf1m0mod9grezj1wsx0w3swlw1u5j == 
\d\d\z\z\d\g\q\u\o\a\5\e\0\0\8\u\0\w\2\g\y\3\y\l\c\h\p\7\m\g\u\w\r\r\7\7\n\3\8\0\s\l\9\b\w\j\g\e\x\p\w\v\i\b\d\m\v\u\k\3\9\y\z\i\0\6\d\7\6\5\g\w\n\d\9\1\h\b\g\c\6\d\d\q\a\a\9\j\y\c\n\7\4\b\l\v\n\m\o\5\j\t\k\4\h\c\n\o\9\1\x\m\f\8\a\z\2\j\m\k\4\h\t\6\n\z\q\w\c\n\t\i\2\d\0\s\7\s\m\i\t\t\i\2\v\r\m\l\x\w\x\v\8\v\m\9\4\v\y\7\7\o\3\d\n\6\a\d\v\k\e\q\k\v\m\z\s\d\k\a\t\k\d\u\h\d\s\p\5\x\k\6\s\o\k\0\0\5\s\0\7\r\x\n\8\b\e\1\1\z\m\m\t\z\9\1\d\9\y\z\f\1\9\1\m\4\i\z\o\c\h\p\b\6\z\n\v\d\h\8\6\5\r\r\e\k\k\q\v\f\e\k\u\a\7\c\q\k\g\t\o\y\l\l\3\s\5\r\2\3\z\p\p\w\f\z\c\h\b\q\q\h\7\k\z\p\l\o\u\p\k\d\2\f\o\w\o\o\j\i\k\b\s\9\4\8\e\d\f\p\t\v\o\k\m\j\g\n\i\l\h\w\r\n\q\9\2\n\3\s\v\4\s\l\q\6\d\6\k\p\9\s\c\r\w\1\3\b\v\g\1\v\t\6\6\w\6\j\t\9\k\a\2\u\a\c\0\o\e\x\h\5\e\q\2\q\2\t\p\q\2\w\3\3\n\f\3\4\e\q\l\d\d\u\9\q\i\u\0\q\k\n\1\5\d\j\k\6\9\l\u\y\6\j\i\r\4\h\2\3\c\p\j\2\j\x\2\v\e\s\8\o\3\y\y\y\i\f\m\v\m\t\r\u\o\u\6\v\7\r\q\r\n\x\v\0\s\b\9\8\1\u\g\q\k\u\3\0\b\0\3\m\w\p\3\c\7\z\u\0\4\2\c\u\f\1\m\0\m\o\d\9\g\r\e\z\j\1\w\s\x\0\w\3\s\w\l\w\1\u\5\j ]] 00:07:56.502 22:39:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:56.502 22:39:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:07:56.502 22:39:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:56.502 22:39:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:56.502 22:39:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.502 22:39:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:56.761 [2024-12-07 22:39:11.287890] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:56.761 [2024-12-07 22:39:11.287991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72602 ] 00:07:56.761 [2024-12-07 22:39:11.418276] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.761 [2024-12-07 22:39:11.455463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.761 [2024-12-07 22:39:11.484862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.761  [2024-12-07T22:39:11.786Z] Copying: 512/512 [B] (average 500 kBps) 00:07:57.020 00:07:57.020 22:39:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 48i20bz9rdmps1jkifxg8qm74n6gdgrhabh9wbroprr1cg6we1j55lhlwup48ywnjzrm4z2f14t9tcmt8qp5ehp3oh1oa7zbeazxhtkrml60nud7pdie77syto13wxtfxel2v8u2lfyn9yxjaq1r79tngnew3uabcgzt4zwjny8oon1i92gtf0l94iag0f69e1khadphalb3775aiz8ir0ggz7fim58z7z1556vq4teyrj56yosve1ocrbyx41dtx8qivzqt0wabr4jg8x9iie9peu1ls1s7j51e6z2i6u5zsfn376ig7q8prhr6l0lauvzq6aqcgv3mlolc0ytg0xtaoxbvljvwbx4pn3jstmi7keiug4k26zn6foz3xxa1eli6x8lhspuaqqjpxaazbo737nxqkca37zjwjf820a28orrdq8e6ej2vpqm4wga5tg8gj3j9epl1y7gsf9xe1q2ajij9qm2gj3mg1lrlqq4raa67be70h54hnpkinz15 == \4\8\i\2\0\b\z\9\r\d\m\p\s\1\j\k\i\f\x\g\8\q\m\7\4\n\6\g\d\g\r\h\a\b\h\9\w\b\r\o\p\r\r\1\c\g\6\w\e\1\j\5\5\l\h\l\w\u\p\4\8\y\w\n\j\z\r\m\4\z\2\f\1\4\t\9\t\c\m\t\8\q\p\5\e\h\p\3\o\h\1\o\a\7\z\b\e\a\z\x\h\t\k\r\m\l\6\0\n\u\d\7\p\d\i\e\7\7\s\y\t\o\1\3\w\x\t\f\x\e\l\2\v\8\u\2\l\f\y\n\9\y\x\j\a\q\1\r\7\9\t\n\g\n\e\w\3\u\a\b\c\g\z\t\4\z\w\j\n\y\8\o\o\n\1\i\9\2\g\t\f\0\l\9\4\i\a\g\0\f\6\9\e\1\k\h\a\d\p\h\a\l\b\3\7\7\5\a\i\z\8\i\r\0\g\g\z\7\f\i\m\5\8\z\7\z\1\5\5\6\v\q\4\t\e\y\r\j\5\6\y\o\s\v\e\1\o\c\r\b\y\x\4\1\d\t\x\8\q\i\v\z\q\t\0\w\a\b\r\4\j\g\8\x\9\i\i\e\9\p\e\u\1\l\s\1\s\7\j\5\1\e\6\z\2\i\6\u\5\z\s\f\n\3\7\6\i\g\7\q\8\p\r\h\r\6\l\0\l\a\u\v\z\q\6\a\q\c\g\v\3\m\l\o\l\c\0\y\t\g\0\x\t\a\o\x\b\v\l\j\v\w\b\x\4\p\n\3\j\s\t\m\i\7\k\e\i\u\g\4\k\2\6\z\n\6\f\o\z\3\x\x\a\1\e\l\i\6\x\8\l\h\s\p\u\a\q\q\j\p\x\a\a\z\b\o\7\3\7\n\x\q\k\c\a\3\7\z\j\w\j\f\8\2\0\a\2\8\o\r\r\d\q\8\e\6\e\j\2\v\p\q\m\4\w\g\a\5\t\g\8\g\j\3\j\9\e\p\l\1\y\7\g\s\f\9\x\e\1\q\2\a\j\i\j\9\q\m\2\g\j\3\m\g\1\l\r\l\q\q\4\r\a\a\6\7\b\e\7\0\h\5\4\h\n\p\k\i\n\z\1\5 ]] 00:07:57.020 22:39:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:57.020 22:39:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:57.020 [2024-12-07 22:39:11.707416] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:57.020 [2024-12-07 22:39:11.707517] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72615 ] 00:07:57.279 [2024-12-07 22:39:11.843431] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.279 [2024-12-07 22:39:11.877189] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.279 [2024-12-07 22:39:11.903271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.279  [2024-12-07T22:39:12.304Z] Copying: 512/512 [B] (average 500 kBps) 00:07:57.538 00:07:57.538 22:39:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 48i20bz9rdmps1jkifxg8qm74n6gdgrhabh9wbroprr1cg6we1j55lhlwup48ywnjzrm4z2f14t9tcmt8qp5ehp3oh1oa7zbeazxhtkrml60nud7pdie77syto13wxtfxel2v8u2lfyn9yxjaq1r79tngnew3uabcgzt4zwjny8oon1i92gtf0l94iag0f69e1khadphalb3775aiz8ir0ggz7fim58z7z1556vq4teyrj56yosve1ocrbyx41dtx8qivzqt0wabr4jg8x9iie9peu1ls1s7j51e6z2i6u5zsfn376ig7q8prhr6l0lauvzq6aqcgv3mlolc0ytg0xtaoxbvljvwbx4pn3jstmi7keiug4k26zn6foz3xxa1eli6x8lhspuaqqjpxaazbo737nxqkca37zjwjf820a28orrdq8e6ej2vpqm4wga5tg8gj3j9epl1y7gsf9xe1q2ajij9qm2gj3mg1lrlqq4raa67be70h54hnpkinz15 == \4\8\i\2\0\b\z\9\r\d\m\p\s\1\j\k\i\f\x\g\8\q\m\7\4\n\6\g\d\g\r\h\a\b\h\9\w\b\r\o\p\r\r\1\c\g\6\w\e\1\j\5\5\l\h\l\w\u\p\4\8\y\w\n\j\z\r\m\4\z\2\f\1\4\t\9\t\c\m\t\8\q\p\5\e\h\p\3\o\h\1\o\a\7\z\b\e\a\z\x\h\t\k\r\m\l\6\0\n\u\d\7\p\d\i\e\7\7\s\y\t\o\1\3\w\x\t\f\x\e\l\2\v\8\u\2\l\f\y\n\9\y\x\j\a\q\1\r\7\9\t\n\g\n\e\w\3\u\a\b\c\g\z\t\4\z\w\j\n\y\8\o\o\n\1\i\9\2\g\t\f\0\l\9\4\i\a\g\0\f\6\9\e\1\k\h\a\d\p\h\a\l\b\3\7\7\5\a\i\z\8\i\r\0\g\g\z\7\f\i\m\5\8\z\7\z\1\5\5\6\v\q\4\t\e\y\r\j\5\6\y\o\s\v\e\1\o\c\r\b\y\x\4\1\d\t\x\8\q\i\v\z\q\t\0\w\a\b\r\4\j\g\8\x\9\i\i\e\9\p\e\u\1\l\s\1\s\7\j\5\1\e\6\z\2\i\6\u\5\z\s\f\n\3\7\6\i\g\7\q\8\p\r\h\r\6\l\0\l\a\u\v\z\q\6\a\q\c\g\v\3\m\l\o\l\c\0\y\t\g\0\x\t\a\o\x\b\v\l\j\v\w\b\x\4\p\n\3\j\s\t\m\i\7\k\e\i\u\g\4\k\2\6\z\n\6\f\o\z\3\x\x\a\1\e\l\i\6\x\8\l\h\s\p\u\a\q\q\j\p\x\a\a\z\b\o\7\3\7\n\x\q\k\c\a\3\7\z\j\w\j\f\8\2\0\a\2\8\o\r\r\d\q\8\e\6\e\j\2\v\p\q\m\4\w\g\a\5\t\g\8\g\j\3\j\9\e\p\l\1\y\7\g\s\f\9\x\e\1\q\2\a\j\i\j\9\q\m\2\g\j\3\m\g\1\l\r\l\q\q\4\r\a\a\6\7\b\e\7\0\h\5\4\h\n\p\k\i\n\z\1\5 ]] 00:07:57.538 22:39:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:57.538 22:39:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:57.538 [2024-12-07 22:39:12.109421] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:57.538 [2024-12-07 22:39:12.109551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72617 ] 00:07:57.538 [2024-12-07 22:39:12.244049] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.538 [2024-12-07 22:39:12.274474] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.538 [2024-12-07 22:39:12.301462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.799  [2024-12-07T22:39:12.565Z] Copying: 512/512 [B] (average 166 kBps) 00:07:57.799 00:07:57.799 22:39:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 48i20bz9rdmps1jkifxg8qm74n6gdgrhabh9wbroprr1cg6we1j55lhlwup48ywnjzrm4z2f14t9tcmt8qp5ehp3oh1oa7zbeazxhtkrml60nud7pdie77syto13wxtfxel2v8u2lfyn9yxjaq1r79tngnew3uabcgzt4zwjny8oon1i92gtf0l94iag0f69e1khadphalb3775aiz8ir0ggz7fim58z7z1556vq4teyrj56yosve1ocrbyx41dtx8qivzqt0wabr4jg8x9iie9peu1ls1s7j51e6z2i6u5zsfn376ig7q8prhr6l0lauvzq6aqcgv3mlolc0ytg0xtaoxbvljvwbx4pn3jstmi7keiug4k26zn6foz3xxa1eli6x8lhspuaqqjpxaazbo737nxqkca37zjwjf820a28orrdq8e6ej2vpqm4wga5tg8gj3j9epl1y7gsf9xe1q2ajij9qm2gj3mg1lrlqq4raa67be70h54hnpkinz15 == \4\8\i\2\0\b\z\9\r\d\m\p\s\1\j\k\i\f\x\g\8\q\m\7\4\n\6\g\d\g\r\h\a\b\h\9\w\b\r\o\p\r\r\1\c\g\6\w\e\1\j\5\5\l\h\l\w\u\p\4\8\y\w\n\j\z\r\m\4\z\2\f\1\4\t\9\t\c\m\t\8\q\p\5\e\h\p\3\o\h\1\o\a\7\z\b\e\a\z\x\h\t\k\r\m\l\6\0\n\u\d\7\p\d\i\e\7\7\s\y\t\o\1\3\w\x\t\f\x\e\l\2\v\8\u\2\l\f\y\n\9\y\x\j\a\q\1\r\7\9\t\n\g\n\e\w\3\u\a\b\c\g\z\t\4\z\w\j\n\y\8\o\o\n\1\i\9\2\g\t\f\0\l\9\4\i\a\g\0\f\6\9\e\1\k\h\a\d\p\h\a\l\b\3\7\7\5\a\i\z\8\i\r\0\g\g\z\7\f\i\m\5\8\z\7\z\1\5\5\6\v\q\4\t\e\y\r\j\5\6\y\o\s\v\e\1\o\c\r\b\y\x\4\1\d\t\x\8\q\i\v\z\q\t\0\w\a\b\r\4\j\g\8\x\9\i\i\e\9\p\e\u\1\l\s\1\s\7\j\5\1\e\6\z\2\i\6\u\5\z\s\f\n\3\7\6\i\g\7\q\8\p\r\h\r\6\l\0\l\a\u\v\z\q\6\a\q\c\g\v\3\m\l\o\l\c\0\y\t\g\0\x\t\a\o\x\b\v\l\j\v\w\b\x\4\p\n\3\j\s\t\m\i\7\k\e\i\u\g\4\k\2\6\z\n\6\f\o\z\3\x\x\a\1\e\l\i\6\x\8\l\h\s\p\u\a\q\q\j\p\x\a\a\z\b\o\7\3\7\n\x\q\k\c\a\3\7\z\j\w\j\f\8\2\0\a\2\8\o\r\r\d\q\8\e\6\e\j\2\v\p\q\m\4\w\g\a\5\t\g\8\g\j\3\j\9\e\p\l\1\y\7\g\s\f\9\x\e\1\q\2\a\j\i\j\9\q\m\2\g\j\3\m\g\1\l\r\l\q\q\4\r\a\a\6\7\b\e\7\0\h\5\4\h\n\p\k\i\n\z\1\5 ]] 00:07:57.799 22:39:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:57.799 22:39:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:57.799 [2024-12-07 22:39:12.520788] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:07:57.799 [2024-12-07 22:39:12.520900] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72626 ] 00:07:58.058 [2024-12-07 22:39:12.656659] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.058 [2024-12-07 22:39:12.687775] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.058 [2024-12-07 22:39:12.713716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.058  [2024-12-07T22:39:13.084Z] Copying: 512/512 [B] (average 500 kBps) 00:07:58.318 00:07:58.318 22:39:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 48i20bz9rdmps1jkifxg8qm74n6gdgrhabh9wbroprr1cg6we1j55lhlwup48ywnjzrm4z2f14t9tcmt8qp5ehp3oh1oa7zbeazxhtkrml60nud7pdie77syto13wxtfxel2v8u2lfyn9yxjaq1r79tngnew3uabcgzt4zwjny8oon1i92gtf0l94iag0f69e1khadphalb3775aiz8ir0ggz7fim58z7z1556vq4teyrj56yosve1ocrbyx41dtx8qivzqt0wabr4jg8x9iie9peu1ls1s7j51e6z2i6u5zsfn376ig7q8prhr6l0lauvzq6aqcgv3mlolc0ytg0xtaoxbvljvwbx4pn3jstmi7keiug4k26zn6foz3xxa1eli6x8lhspuaqqjpxaazbo737nxqkca37zjwjf820a28orrdq8e6ej2vpqm4wga5tg8gj3j9epl1y7gsf9xe1q2ajij9qm2gj3mg1lrlqq4raa67be70h54hnpkinz15 == \4\8\i\2\0\b\z\9\r\d\m\p\s\1\j\k\i\f\x\g\8\q\m\7\4\n\6\g\d\g\r\h\a\b\h\9\w\b\r\o\p\r\r\1\c\g\6\w\e\1\j\5\5\l\h\l\w\u\p\4\8\y\w\n\j\z\r\m\4\z\2\f\1\4\t\9\t\c\m\t\8\q\p\5\e\h\p\3\o\h\1\o\a\7\z\b\e\a\z\x\h\t\k\r\m\l\6\0\n\u\d\7\p\d\i\e\7\7\s\y\t\o\1\3\w\x\t\f\x\e\l\2\v\8\u\2\l\f\y\n\9\y\x\j\a\q\1\r\7\9\t\n\g\n\e\w\3\u\a\b\c\g\z\t\4\z\w\j\n\y\8\o\o\n\1\i\9\2\g\t\f\0\l\9\4\i\a\g\0\f\6\9\e\1\k\h\a\d\p\h\a\l\b\3\7\7\5\a\i\z\8\i\r\0\g\g\z\7\f\i\m\5\8\z\7\z\1\5\5\6\v\q\4\t\e\y\r\j\5\6\y\o\s\v\e\1\o\c\r\b\y\x\4\1\d\t\x\8\q\i\v\z\q\t\0\w\a\b\r\4\j\g\8\x\9\i\i\e\9\p\e\u\1\l\s\1\s\7\j\5\1\e\6\z\2\i\6\u\5\z\s\f\n\3\7\6\i\g\7\q\8\p\r\h\r\6\l\0\l\a\u\v\z\q\6\a\q\c\g\v\3\m\l\o\l\c\0\y\t\g\0\x\t\a\o\x\b\v\l\j\v\w\b\x\4\p\n\3\j\s\t\m\i\7\k\e\i\u\g\4\k\2\6\z\n\6\f\o\z\3\x\x\a\1\e\l\i\6\x\8\l\h\s\p\u\a\q\q\j\p\x\a\a\z\b\o\7\3\7\n\x\q\k\c\a\3\7\z\j\w\j\f\8\2\0\a\2\8\o\r\r\d\q\8\e\6\e\j\2\v\p\q\m\4\w\g\a\5\t\g\8\g\j\3\j\9\e\p\l\1\y\7\g\s\f\9\x\e\1\q\2\a\j\i\j\9\q\m\2\g\j\3\m\g\1\l\r\l\q\q\4\r\a\a\6\7\b\e\7\0\h\5\4\h\n\p\k\i\n\z\1\5 ]] 00:07:58.318 00:07:58.318 real 0m3.364s 00:07:58.318 user 0m1.644s 00:07:58.318 sys 0m0.758s 00:07:58.318 ************************************ 00:07:58.318 END TEST dd_flags_misc_forced_aio 00:07:58.318 ************************************ 00:07:58.318 22:39:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.318 22:39:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:58.318 22:39:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:07:58.318 22:39:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:58.318 22:39:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:58.318 00:07:58.318 real 0m15.650s 00:07:58.318 user 0m6.565s 00:07:58.318 sys 0m4.360s 00:07:58.318 22:39:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.318 22:39:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:07:58.318 ************************************ 00:07:58.318 END TEST spdk_dd_posix 00:07:58.318 ************************************ 00:07:58.318 22:39:12 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:58.318 22:39:12 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.318 22:39:12 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.318 22:39:12 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:58.318 ************************************ 00:07:58.318 START TEST spdk_dd_malloc 00:07:58.318 ************************************ 00:07:58.318 22:39:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:07:58.318 * Looking for test storage... 00:07:58.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:58.318 22:39:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:58.318 22:39:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:58.318 22:39:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:58.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.578 --rc genhtml_branch_coverage=1 00:07:58.578 --rc genhtml_function_coverage=1 00:07:58.578 --rc genhtml_legend=1 00:07:58.578 --rc geninfo_all_blocks=1 00:07:58.578 --rc geninfo_unexecuted_blocks=1 00:07:58.578 00:07:58.578 ' 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:58.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.578 --rc genhtml_branch_coverage=1 00:07:58.578 --rc genhtml_function_coverage=1 00:07:58.578 --rc genhtml_legend=1 00:07:58.578 --rc geninfo_all_blocks=1 00:07:58.578 --rc geninfo_unexecuted_blocks=1 00:07:58.578 00:07:58.578 ' 00:07:58.578 22:39:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:58.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.578 --rc genhtml_branch_coverage=1 00:07:58.578 --rc genhtml_function_coverage=1 00:07:58.578 --rc genhtml_legend=1 00:07:58.578 --rc geninfo_all_blocks=1 00:07:58.579 --rc geninfo_unexecuted_blocks=1 00:07:58.579 00:07:58.579 ' 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:58.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.579 --rc genhtml_branch_coverage=1 00:07:58.579 --rc genhtml_function_coverage=1 00:07:58.579 --rc genhtml_legend=1 00:07:58.579 --rc geninfo_all_blocks=1 00:07:58.579 --rc geninfo_unexecuted_blocks=1 00:07:58.579 00:07:58.579 ' 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.579 22:39:13 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:07:58.579 ************************************ 00:07:58.579 START TEST dd_malloc_copy 00:07:58.579 ************************************ 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:07:58.579 22:39:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:07:58.579 [2024-12-07 22:39:13.220677] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:07:58.579 [2024-12-07 22:39:13.220780] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72701 ] 00:07:58.579 { 00:07:58.579 "subsystems": [ 00:07:58.579 { 00:07:58.579 "subsystem": "bdev", 00:07:58.579 "config": [ 00:07:58.579 { 00:07:58.579 "params": { 00:07:58.579 "block_size": 512, 00:07:58.579 "num_blocks": 1048576, 00:07:58.579 "name": "malloc0" 00:07:58.579 }, 00:07:58.579 "method": "bdev_malloc_create" 00:07:58.579 }, 00:07:58.579 { 00:07:58.579 "params": { 00:07:58.579 "block_size": 512, 00:07:58.579 "num_blocks": 1048576, 00:07:58.579 "name": "malloc1" 00:07:58.579 }, 00:07:58.579 "method": "bdev_malloc_create" 00:07:58.579 }, 00:07:58.579 { 00:07:58.579 "method": "bdev_wait_for_examine" 00:07:58.579 } 00:07:58.579 ] 00:07:58.579 } 00:07:58.579 ] 00:07:58.579 } 00:07:58.839 [2024-12-07 22:39:13.351957] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.839 [2024-12-07 22:39:13.383003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.839 [2024-12-07 22:39:13.410919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.218  [2024-12-07T22:39:15.920Z] Copying: 239/512 [MB] (239 MBps) [2024-12-07T22:39:15.920Z] Copying: 474/512 [MB] (234 MBps) [2024-12-07T22:39:16.179Z] Copying: 512/512 [MB] (average 234 MBps) 00:08:01.413 00:08:01.413 22:39:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:01.413 22:39:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:01.413 22:39:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:01.413 22:39:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:01.413 [2024-12-07 22:39:16.147462] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:01.413 [2024-12-07 22:39:16.147568] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72743 ] 00:08:01.413 { 00:08:01.413 "subsystems": [ 00:08:01.413 { 00:08:01.413 "subsystem": "bdev", 00:08:01.413 "config": [ 00:08:01.413 { 00:08:01.413 "params": { 00:08:01.413 "block_size": 512, 00:08:01.413 "num_blocks": 1048576, 00:08:01.413 "name": "malloc0" 00:08:01.413 }, 00:08:01.413 "method": "bdev_malloc_create" 00:08:01.413 }, 00:08:01.413 { 00:08:01.413 "params": { 00:08:01.413 "block_size": 512, 00:08:01.413 "num_blocks": 1048576, 00:08:01.413 "name": "malloc1" 00:08:01.413 }, 00:08:01.413 "method": "bdev_malloc_create" 00:08:01.413 }, 00:08:01.413 { 00:08:01.413 "method": "bdev_wait_for_examine" 00:08:01.413 } 00:08:01.413 ] 00:08:01.413 } 00:08:01.413 ] 00:08:01.413 } 00:08:01.671 [2024-12-07 22:39:16.277369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.671 [2024-12-07 22:39:16.308502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.672 [2024-12-07 22:39:16.335163] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.050  [2024-12-07T22:39:18.756Z] Copying: 234/512 [MB] (234 MBps) [2024-12-07T22:39:18.756Z] Copying: 475/512 [MB] (240 MBps) [2024-12-07T22:39:19.326Z] Copying: 512/512 [MB] (average 237 MBps) 00:08:04.560 00:08:04.560 00:08:04.560 real 0m5.850s 00:08:04.560 user 0m5.235s 00:08:04.560 sys 0m0.481s 00:08:04.560 22:39:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.560 ************************************ 00:08:04.560 END TEST dd_malloc_copy 00:08:04.560 ************************************ 00:08:04.560 22:39:19 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:04.560 00:08:04.560 real 0m6.092s 00:08:04.560 user 0m5.370s 00:08:04.560 sys 0m0.591s 00:08:04.560 22:39:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.560 22:39:19 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:04.560 ************************************ 00:08:04.560 END TEST spdk_dd_malloc 00:08:04.560 ************************************ 00:08:04.560 22:39:19 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:04.560 22:39:19 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:04.560 22:39:19 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.560 22:39:19 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:04.560 ************************************ 00:08:04.560 START TEST spdk_dd_bdev_to_bdev 00:08:04.560 ************************************ 00:08:04.560 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:04.560 * Looking for test storage... 
00:08:04.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:04.560 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:04.560 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lcov --version 00:08:04.560 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:04.560 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:04.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.561 --rc genhtml_branch_coverage=1 00:08:04.561 --rc genhtml_function_coverage=1 00:08:04.561 --rc genhtml_legend=1 00:08:04.561 --rc geninfo_all_blocks=1 00:08:04.561 --rc geninfo_unexecuted_blocks=1 00:08:04.561 00:08:04.561 ' 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:04.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.561 --rc genhtml_branch_coverage=1 00:08:04.561 --rc genhtml_function_coverage=1 00:08:04.561 --rc genhtml_legend=1 00:08:04.561 --rc geninfo_all_blocks=1 00:08:04.561 --rc geninfo_unexecuted_blocks=1 00:08:04.561 00:08:04.561 ' 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:04.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.561 --rc genhtml_branch_coverage=1 00:08:04.561 --rc genhtml_function_coverage=1 00:08:04.561 --rc genhtml_legend=1 00:08:04.561 --rc geninfo_all_blocks=1 00:08:04.561 --rc geninfo_unexecuted_blocks=1 00:08:04.561 00:08:04.561 ' 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:04.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.561 --rc genhtml_branch_coverage=1 00:08:04.561 --rc genhtml_function_coverage=1 00:08:04.561 --rc genhtml_legend=1 00:08:04.561 --rc geninfo_all_blocks=1 00:08:04.561 --rc geninfo_unexecuted_blocks=1 00:08:04.561 00:08:04.561 ' 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.561 22:39:19 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:04.561 ************************************ 00:08:04.561 START TEST dd_inflate_file 00:08:04.561 ************************************ 00:08:04.561 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:04.859 [2024-12-07 22:39:19.366962] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:04.859 [2024-12-07 22:39:19.367075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72850 ] 00:08:04.859 [2024-12-07 22:39:19.507143] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.859 [2024-12-07 22:39:19.546652] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.859 [2024-12-07 22:39:19.576097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.136  [2024-12-07T22:39:19.902Z] Copying: 64/64 [MB] (average 1488 MBps) 00:08:05.136 00:08:05.136 00:08:05.136 real 0m0.452s 00:08:05.136 user 0m0.248s 00:08:05.136 sys 0m0.225s 00:08:05.136 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.136 ************************************ 00:08:05.136 END TEST dd_inflate_file 00:08:05.136 ************************************ 00:08:05.136 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:05.136 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:05.136 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:05.136 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:05.136 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:05.136 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:05.136 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:05.136 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.136 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:05.136 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:05.136 ************************************ 00:08:05.136 START TEST dd_copy_to_out_bdev 00:08:05.136 ************************************ 00:08:05.136 22:39:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:05.136 { 00:08:05.136 "subsystems": [ 00:08:05.136 { 00:08:05.136 "subsystem": "bdev", 00:08:05.136 "config": [ 00:08:05.136 { 00:08:05.136 "params": { 00:08:05.136 "trtype": "pcie", 00:08:05.136 "traddr": "0000:00:10.0", 00:08:05.136 "name": "Nvme0" 00:08:05.136 }, 00:08:05.136 "method": "bdev_nvme_attach_controller" 00:08:05.136 }, 00:08:05.136 { 00:08:05.136 "params": { 00:08:05.136 "trtype": "pcie", 00:08:05.136 "traddr": "0000:00:11.0", 00:08:05.136 "name": "Nvme1" 00:08:05.136 }, 00:08:05.136 "method": "bdev_nvme_attach_controller" 00:08:05.136 }, 00:08:05.136 { 00:08:05.136 "method": "bdev_wait_for_examine" 00:08:05.136 } 00:08:05.136 ] 00:08:05.136 } 00:08:05.136 ] 00:08:05.136 } 00:08:05.136 [2024-12-07 22:39:19.875441] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:05.136 [2024-12-07 22:39:19.876006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72884 ] 00:08:05.395 [2024-12-07 22:39:20.010990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.395 [2024-12-07 22:39:20.044876] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.395 [2024-12-07 22:39:20.072769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.772  [2024-12-07T22:39:21.538Z] Copying: 53/64 [MB] (53 MBps) [2024-12-07T22:39:21.803Z] Copying: 64/64 [MB] (average 53 MBps) 00:08:07.037 00:08:07.037 00:08:07.037 real 0m1.765s 00:08:07.037 user 0m1.599s 00:08:07.037 sys 0m1.408s 00:08:07.037 22:39:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.037 ************************************ 00:08:07.037 END TEST dd_copy_to_out_bdev 00:08:07.037 ************************************ 00:08:07.037 22:39:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:07.037 22:39:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:07.037 22:39:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:07.037 22:39:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.037 22:39:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.037 22:39:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:07.037 ************************************ 00:08:07.037 START TEST dd_offset_magic 00:08:07.037 ************************************ 00:08:07.037 22:39:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:08:07.037 22:39:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:07.037 22:39:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:07.037 22:39:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:07.037 22:39:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:07.037 22:39:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:07.037 22:39:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:07.037 22:39:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:07.037 22:39:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:07.037 [2024-12-07 22:39:21.706988] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:07.037 [2024-12-07 22:39:21.707134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72923 ] 00:08:07.037 { 00:08:07.037 "subsystems": [ 00:08:07.037 { 00:08:07.037 "subsystem": "bdev", 00:08:07.037 "config": [ 00:08:07.037 { 00:08:07.037 "params": { 00:08:07.037 "trtype": "pcie", 00:08:07.037 "traddr": "0000:00:10.0", 00:08:07.037 "name": "Nvme0" 00:08:07.037 }, 00:08:07.037 "method": "bdev_nvme_attach_controller" 00:08:07.037 }, 00:08:07.037 { 00:08:07.037 "params": { 00:08:07.037 "trtype": "pcie", 00:08:07.037 "traddr": "0000:00:11.0", 00:08:07.037 "name": "Nvme1" 00:08:07.037 }, 00:08:07.037 "method": "bdev_nvme_attach_controller" 00:08:07.037 }, 00:08:07.037 { 00:08:07.037 "method": "bdev_wait_for_examine" 00:08:07.037 } 00:08:07.037 ] 00:08:07.037 } 00:08:07.037 ] 00:08:07.037 } 00:08:07.294 [2024-12-07 22:39:21.849797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.294 [2024-12-07 22:39:21.881565] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.294 [2024-12-07 22:39:21.907867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.552  [2024-12-07T22:39:22.575Z] Copying: 65/65 [MB] (average 942 MBps) 00:08:07.809 00:08:07.809 22:39:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:07.809 22:39:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:07.809 22:39:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:07.809 22:39:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:07.809 [2024-12-07 22:39:22.398515] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:07.809 [2024-12-07 22:39:22.398654] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72943 ] 00:08:07.809 { 00:08:07.809 "subsystems": [ 00:08:07.809 { 00:08:07.809 "subsystem": "bdev", 00:08:07.809 "config": [ 00:08:07.809 { 00:08:07.809 "params": { 00:08:07.809 "trtype": "pcie", 00:08:07.809 "traddr": "0000:00:10.0", 00:08:07.809 "name": "Nvme0" 00:08:07.809 }, 00:08:07.809 "method": "bdev_nvme_attach_controller" 00:08:07.809 }, 00:08:07.809 { 00:08:07.809 "params": { 00:08:07.809 "trtype": "pcie", 00:08:07.809 "traddr": "0000:00:11.0", 00:08:07.809 "name": "Nvme1" 00:08:07.809 }, 00:08:07.809 "method": "bdev_nvme_attach_controller" 00:08:07.809 }, 00:08:07.809 { 00:08:07.809 "method": "bdev_wait_for_examine" 00:08:07.809 } 00:08:07.809 ] 00:08:07.809 } 00:08:07.809 ] 00:08:07.809 } 00:08:07.810 [2024-12-07 22:39:22.540081] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.067 [2024-12-07 22:39:22.579821] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.067 [2024-12-07 22:39:22.613074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.067  [2024-12-07T22:39:23.091Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:08.325 00:08:08.325 22:39:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:08.325 22:39:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:08.325 22:39:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:08.325 22:39:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:08.325 22:39:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:08.325 22:39:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:08.325 22:39:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:08.325 [2024-12-07 22:39:22.980217] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:08.325 [2024-12-07 22:39:22.980318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72960 ] 00:08:08.325 { 00:08:08.325 "subsystems": [ 00:08:08.325 { 00:08:08.325 "subsystem": "bdev", 00:08:08.325 "config": [ 00:08:08.325 { 00:08:08.325 "params": { 00:08:08.325 "trtype": "pcie", 00:08:08.325 "traddr": "0000:00:10.0", 00:08:08.325 "name": "Nvme0" 00:08:08.325 }, 00:08:08.325 "method": "bdev_nvme_attach_controller" 00:08:08.325 }, 00:08:08.325 { 00:08:08.325 "params": { 00:08:08.325 "trtype": "pcie", 00:08:08.325 "traddr": "0000:00:11.0", 00:08:08.325 "name": "Nvme1" 00:08:08.325 }, 00:08:08.325 "method": "bdev_nvme_attach_controller" 00:08:08.325 }, 00:08:08.325 { 00:08:08.325 "method": "bdev_wait_for_examine" 00:08:08.325 } 00:08:08.325 ] 00:08:08.325 } 00:08:08.325 ] 00:08:08.325 } 00:08:08.582 [2024-12-07 22:39:23.118379] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.582 [2024-12-07 22:39:23.151279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.582 [2024-12-07 22:39:23.178361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.840  [2024-12-07T22:39:23.606Z] Copying: 65/65 [MB] (average 1065 MBps) 00:08:08.840 00:08:08.840 22:39:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:08.840 22:39:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:08.840 22:39:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:08.840 22:39:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:09.099 [2024-12-07 22:39:23.616960] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:09.099 [2024-12-07 22:39:23.617046] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72974 ] 00:08:09.099 { 00:08:09.099 "subsystems": [ 00:08:09.099 { 00:08:09.099 "subsystem": "bdev", 00:08:09.099 "config": [ 00:08:09.099 { 00:08:09.099 "params": { 00:08:09.099 "trtype": "pcie", 00:08:09.099 "traddr": "0000:00:10.0", 00:08:09.099 "name": "Nvme0" 00:08:09.099 }, 00:08:09.099 "method": "bdev_nvme_attach_controller" 00:08:09.099 }, 00:08:09.099 { 00:08:09.099 "params": { 00:08:09.099 "trtype": "pcie", 00:08:09.099 "traddr": "0000:00:11.0", 00:08:09.099 "name": "Nvme1" 00:08:09.099 }, 00:08:09.099 "method": "bdev_nvme_attach_controller" 00:08:09.099 }, 00:08:09.099 { 00:08:09.099 "method": "bdev_wait_for_examine" 00:08:09.099 } 00:08:09.099 ] 00:08:09.099 } 00:08:09.099 ] 00:08:09.099 } 00:08:09.099 [2024-12-07 22:39:23.754086] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.099 [2024-12-07 22:39:23.786107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.099 [2024-12-07 22:39:23.815546] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.358  [2024-12-07T22:39:24.124Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:09.358 00:08:09.358 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:09.358 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:09.358 00:08:09.358 real 0m2.467s 00:08:09.358 user 0m1.833s 00:08:09.358 sys 0m0.656s 00:08:09.358 ************************************ 00:08:09.358 END TEST dd_offset_magic 00:08:09.358 ************************************ 00:08:09.358 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.358 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:09.617 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:09.617 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:09.617 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:09.617 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:09.617 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:09.617 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:09.617 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:09.617 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:09.617 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:09.617 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:09.617 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:09.617 [2024-12-07 22:39:24.203711] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:09.617 [2024-12-07 22:39:24.203796] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73011 ] 00:08:09.617 { 00:08:09.617 "subsystems": [ 00:08:09.617 { 00:08:09.617 "subsystem": "bdev", 00:08:09.617 "config": [ 00:08:09.617 { 00:08:09.617 "params": { 00:08:09.617 "trtype": "pcie", 00:08:09.617 "traddr": "0000:00:10.0", 00:08:09.617 "name": "Nvme0" 00:08:09.617 }, 00:08:09.617 "method": "bdev_nvme_attach_controller" 00:08:09.617 }, 00:08:09.617 { 00:08:09.617 "params": { 00:08:09.617 "trtype": "pcie", 00:08:09.617 "traddr": "0000:00:11.0", 00:08:09.617 "name": "Nvme1" 00:08:09.617 }, 00:08:09.617 "method": "bdev_nvme_attach_controller" 00:08:09.617 }, 00:08:09.617 { 00:08:09.617 "method": "bdev_wait_for_examine" 00:08:09.617 } 00:08:09.617 ] 00:08:09.617 } 00:08:09.617 ] 00:08:09.617 } 00:08:09.617 [2024-12-07 22:39:24.341293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.617 [2024-12-07 22:39:24.371707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.876 [2024-12-07 22:39:24.399337] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.876  [2024-12-07T22:39:24.901Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:10.135 00:08:10.135 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:10.135 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:10.135 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:10.135 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:10.135 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:10.135 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:10.135 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:10.135 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:10.135 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:10.135 22:39:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:10.135 [2024-12-07 22:39:24.735563] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:10.135 [2024-12-07 22:39:24.735688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73021 ] 00:08:10.135 { 00:08:10.135 "subsystems": [ 00:08:10.135 { 00:08:10.135 "subsystem": "bdev", 00:08:10.135 "config": [ 00:08:10.135 { 00:08:10.135 "params": { 00:08:10.135 "trtype": "pcie", 00:08:10.135 "traddr": "0000:00:10.0", 00:08:10.135 "name": "Nvme0" 00:08:10.135 }, 00:08:10.135 "method": "bdev_nvme_attach_controller" 00:08:10.135 }, 00:08:10.135 { 00:08:10.135 "params": { 00:08:10.135 "trtype": "pcie", 00:08:10.135 "traddr": "0000:00:11.0", 00:08:10.135 "name": "Nvme1" 00:08:10.135 }, 00:08:10.135 "method": "bdev_nvme_attach_controller" 00:08:10.135 }, 00:08:10.135 { 00:08:10.135 "method": "bdev_wait_for_examine" 00:08:10.135 } 00:08:10.135 ] 00:08:10.135 } 00:08:10.135 ] 00:08:10.135 } 00:08:10.136 [2024-12-07 22:39:24.873578] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.394 [2024-12-07 22:39:24.906433] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.394 [2024-12-07 22:39:24.933189] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.394  [2024-12-07T22:39:25.419Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:08:10.653 00:08:10.653 22:39:25 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:10.653 00:08:10.653 real 0m6.138s 00:08:10.653 user 0m4.629s 00:08:10.653 sys 0m2.838s 00:08:10.653 22:39:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.653 ************************************ 00:08:10.653 END TEST spdk_dd_bdev_to_bdev 00:08:10.653 22:39:25 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:10.653 ************************************ 00:08:10.653 22:39:25 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:10.653 22:39:25 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:10.653 22:39:25 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.653 22:39:25 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.653 22:39:25 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:10.653 ************************************ 00:08:10.653 START TEST spdk_dd_uring 00:08:10.653 ************************************ 00:08:10.653 22:39:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:10.653 * Looking for test storage... 
00:08:10.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:10.653 22:39:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:10.653 22:39:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lcov --version 00:08:10.653 22:39:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:10.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.912 --rc genhtml_branch_coverage=1 00:08:10.912 --rc genhtml_function_coverage=1 00:08:10.912 --rc genhtml_legend=1 00:08:10.912 --rc geninfo_all_blocks=1 00:08:10.912 --rc geninfo_unexecuted_blocks=1 00:08:10.912 00:08:10.912 ' 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:10.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.912 --rc genhtml_branch_coverage=1 00:08:10.912 --rc genhtml_function_coverage=1 00:08:10.912 --rc genhtml_legend=1 00:08:10.912 --rc geninfo_all_blocks=1 00:08:10.912 --rc geninfo_unexecuted_blocks=1 00:08:10.912 00:08:10.912 ' 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:10.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.912 --rc genhtml_branch_coverage=1 00:08:10.912 --rc genhtml_function_coverage=1 00:08:10.912 --rc genhtml_legend=1 00:08:10.912 --rc geninfo_all_blocks=1 00:08:10.912 --rc geninfo_unexecuted_blocks=1 00:08:10.912 00:08:10.912 ' 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:10.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.912 --rc genhtml_branch_coverage=1 00:08:10.912 --rc genhtml_function_coverage=1 00:08:10.912 --rc genhtml_legend=1 00:08:10.912 --rc geninfo_all_blocks=1 00:08:10.912 --rc geninfo_unexecuted_blocks=1 00:08:10.912 00:08:10.912 ' 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.912 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:10.913 ************************************ 00:08:10.913 START TEST dd_uring_copy 00:08:10.913 ************************************ 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:10.913 
22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=ipia0aixt6y5jp1f9gt95ioc9qrkan5oh6awk66staxgrqih68nfxjjz3h27r1dpz4f7kxf9gsig364onf7xy5p62fzmq4kh1h9szhh3t8x4x7mhheh3mrsnm3f026da2a105z19dtohclzceaam7swauhr475nvljnj007s2rm87q40ywvkum68r5wrwsy76zapghpbdnky3cr5e8r94natf4eelz4mleenm3wxwm7hu4by2nlg7uoq4aletf63uhivuknnd29vyhl3rzs9xkkqff301prmrtaq1019bfx835i0ywxyxg9ulmg3q4g04szrn5dvqkweylpox0bxx9xgjer2ylig7dthuie6jp848zep2czpyux8letnf8i2ftx3ekdvmh967y3icnj6gi5n1sd8rg9denbgtpbf32rvar51bp3i7rxherwsf4m190tueuypbxjo3bx9zt4e502irftvwkbodbpf2b45y7w057oqprm3hpw3wcvb59fh03c3u3jkdtrib1potrgnnmr6j9hemhcschdqa6mn122bwn1iqrrhl5hkr9oo7l1huiup08ih7eqxk6s4gaimzdi2el0u90z6yx9yjgya9b4karol1l5z8z0y1l02cgq35ahh6zz0h8cywduzzt2b9c1iwstkji5p0beg9fty8bey74jbnz8ih53gkpxqmuynh4vhk21z2q0ra3jmoutfg2tkx5dkxkn24jivvn1xrq7vtwb47rkbabtb8fkfiu7zx94rhf1ql9c0347p0wu1p7h554pk7g1kjn5nmfxoswv4ni2z7j9ys173zwehq0cr5yyg8l6uybexm6l5kfm35xjoqnnyawztm17im4b6ah2cccx1nccvo9jwfpttukyuqtsx2yngqtnudda6tic3f8ici6cbhmzy5ngtnde1ed4ccp3e86f0x6ha3dy8nkyhih9fkh1e5v95egomct0dpoja1j165x4mvopm8svlbu36160j8q1htsnpzksdcu0g 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
ipia0aixt6y5jp1f9gt95ioc9qrkan5oh6awk66staxgrqih68nfxjjz3h27r1dpz4f7kxf9gsig364onf7xy5p62fzmq4kh1h9szhh3t8x4x7mhheh3mrsnm3f026da2a105z19dtohclzceaam7swauhr475nvljnj007s2rm87q40ywvkum68r5wrwsy76zapghpbdnky3cr5e8r94natf4eelz4mleenm3wxwm7hu4by2nlg7uoq4aletf63uhivuknnd29vyhl3rzs9xkkqff301prmrtaq1019bfx835i0ywxyxg9ulmg3q4g04szrn5dvqkweylpox0bxx9xgjer2ylig7dthuie6jp848zep2czpyux8letnf8i2ftx3ekdvmh967y3icnj6gi5n1sd8rg9denbgtpbf32rvar51bp3i7rxherwsf4m190tueuypbxjo3bx9zt4e502irftvwkbodbpf2b45y7w057oqprm3hpw3wcvb59fh03c3u3jkdtrib1potrgnnmr6j9hemhcschdqa6mn122bwn1iqrrhl5hkr9oo7l1huiup08ih7eqxk6s4gaimzdi2el0u90z6yx9yjgya9b4karol1l5z8z0y1l02cgq35ahh6zz0h8cywduzzt2b9c1iwstkji5p0beg9fty8bey74jbnz8ih53gkpxqmuynh4vhk21z2q0ra3jmoutfg2tkx5dkxkn24jivvn1xrq7vtwb47rkbabtb8fkfiu7zx94rhf1ql9c0347p0wu1p7h554pk7g1kjn5nmfxoswv4ni2z7j9ys173zwehq0cr5yyg8l6uybexm6l5kfm35xjoqnnyawztm17im4b6ah2cccx1nccvo9jwfpttukyuqtsx2yngqtnudda6tic3f8ici6cbhmzy5ngtnde1ed4ccp3e86f0x6ha3dy8nkyhih9fkh1e5v95egomct0dpoja1j165x4mvopm8svlbu36160j8q1htsnpzksdcu0g 00:08:10.913 22:39:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:10.913 [2024-12-07 22:39:25.636976] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:10.913 [2024-12-07 22:39:25.637094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73099 ] 00:08:11.171 [2024-12-07 22:39:25.772599] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.171 [2024-12-07 22:39:25.812481] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.171 [2024-12-07 22:39:25.843830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:11.775  [2024-12-07T22:39:26.799Z] Copying: 511/511 [MB] (average 1368 MBps) 00:08:12.033 00:08:12.033 22:39:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:12.033 22:39:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:12.033 22:39:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:12.033 22:39:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:12.033 [2024-12-07 22:39:26.620429] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:12.033 [2024-12-07 22:39:26.620527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73115 ] 00:08:12.033 { 00:08:12.033 "subsystems": [ 00:08:12.033 { 00:08:12.033 "subsystem": "bdev", 00:08:12.033 "config": [ 00:08:12.033 { 00:08:12.033 "params": { 00:08:12.033 "block_size": 512, 00:08:12.033 "num_blocks": 1048576, 00:08:12.033 "name": "malloc0" 00:08:12.033 }, 00:08:12.033 "method": "bdev_malloc_create" 00:08:12.033 }, 00:08:12.033 { 00:08:12.033 "params": { 00:08:12.033 "filename": "/dev/zram1", 00:08:12.033 "name": "uring0" 00:08:12.033 }, 00:08:12.033 "method": "bdev_uring_create" 00:08:12.033 }, 00:08:12.033 { 00:08:12.033 "method": "bdev_wait_for_examine" 00:08:12.033 } 00:08:12.033 ] 00:08:12.033 } 00:08:12.033 ] 00:08:12.033 } 00:08:12.033 [2024-12-07 22:39:26.757611] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.033 [2024-12-07 22:39:26.792315] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.292 [2024-12-07 22:39:26.820612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.226  [2024-12-07T22:39:29.370Z] Copying: 239/512 [MB] (239 MBps) [2024-12-07T22:39:29.370Z] Copying: 444/512 [MB] (205 MBps) [2024-12-07T22:39:29.629Z] Copying: 512/512 [MB] (average 223 MBps) 00:08:14.863 00:08:14.863 22:39:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:14.863 22:39:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:14.863 22:39:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:14.863 22:39:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:14.863 [2024-12-07 22:39:29.505976] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:14.863 [2024-12-07 22:39:29.506091] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73159 ] 00:08:14.863 { 00:08:14.863 "subsystems": [ 00:08:14.863 { 00:08:14.863 "subsystem": "bdev", 00:08:14.863 "config": [ 00:08:14.863 { 00:08:14.863 "params": { 00:08:14.863 "block_size": 512, 00:08:14.863 "num_blocks": 1048576, 00:08:14.863 "name": "malloc0" 00:08:14.863 }, 00:08:14.863 "method": "bdev_malloc_create" 00:08:14.863 }, 00:08:14.863 { 00:08:14.863 "params": { 00:08:14.863 "filename": "/dev/zram1", 00:08:14.863 "name": "uring0" 00:08:14.863 }, 00:08:14.863 "method": "bdev_uring_create" 00:08:14.863 }, 00:08:14.863 { 00:08:14.863 "method": "bdev_wait_for_examine" 00:08:14.863 } 00:08:14.863 ] 00:08:14.863 } 00:08:14.863 ] 00:08:14.863 } 00:08:15.121 [2024-12-07 22:39:29.642915] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.122 [2024-12-07 22:39:29.675588] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.122 [2024-12-07 22:39:29.703119] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.059  [2024-12-07T22:39:32.202Z] Copying: 177/512 [MB] (177 MBps) [2024-12-07T22:39:33.137Z] Copying: 336/512 [MB] (158 MBps) [2024-12-07T22:39:33.137Z] Copying: 494/512 [MB] (157 MBps) [2024-12-07T22:39:33.137Z] Copying: 512/512 [MB] (average 165 MBps) 00:08:18.371 00:08:18.630 22:39:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:18.630 22:39:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ ipia0aixt6y5jp1f9gt95ioc9qrkan5oh6awk66staxgrqih68nfxjjz3h27r1dpz4f7kxf9gsig364onf7xy5p62fzmq4kh1h9szhh3t8x4x7mhheh3mrsnm3f026da2a105z19dtohclzceaam7swauhr475nvljnj007s2rm87q40ywvkum68r5wrwsy76zapghpbdnky3cr5e8r94natf4eelz4mleenm3wxwm7hu4by2nlg7uoq4aletf63uhivuknnd29vyhl3rzs9xkkqff301prmrtaq1019bfx835i0ywxyxg9ulmg3q4g04szrn5dvqkweylpox0bxx9xgjer2ylig7dthuie6jp848zep2czpyux8letnf8i2ftx3ekdvmh967y3icnj6gi5n1sd8rg9denbgtpbf32rvar51bp3i7rxherwsf4m190tueuypbxjo3bx9zt4e502irftvwkbodbpf2b45y7w057oqprm3hpw3wcvb59fh03c3u3jkdtrib1potrgnnmr6j9hemhcschdqa6mn122bwn1iqrrhl5hkr9oo7l1huiup08ih7eqxk6s4gaimzdi2el0u90z6yx9yjgya9b4karol1l5z8z0y1l02cgq35ahh6zz0h8cywduzzt2b9c1iwstkji5p0beg9fty8bey74jbnz8ih53gkpxqmuynh4vhk21z2q0ra3jmoutfg2tkx5dkxkn24jivvn1xrq7vtwb47rkbabtb8fkfiu7zx94rhf1ql9c0347p0wu1p7h554pk7g1kjn5nmfxoswv4ni2z7j9ys173zwehq0cr5yyg8l6uybexm6l5kfm35xjoqnnyawztm17im4b6ah2cccx1nccvo9jwfpttukyuqtsx2yngqtnudda6tic3f8ici6cbhmzy5ngtnde1ed4ccp3e86f0x6ha3dy8nkyhih9fkh1e5v95egomct0dpoja1j165x4mvopm8svlbu36160j8q1htsnpzksdcu0g == 
\i\p\i\a\0\a\i\x\t\6\y\5\j\p\1\f\9\g\t\9\5\i\o\c\9\q\r\k\a\n\5\o\h\6\a\w\k\6\6\s\t\a\x\g\r\q\i\h\6\8\n\f\x\j\j\z\3\h\2\7\r\1\d\p\z\4\f\7\k\x\f\9\g\s\i\g\3\6\4\o\n\f\7\x\y\5\p\6\2\f\z\m\q\4\k\h\1\h\9\s\z\h\h\3\t\8\x\4\x\7\m\h\h\e\h\3\m\r\s\n\m\3\f\0\2\6\d\a\2\a\1\0\5\z\1\9\d\t\o\h\c\l\z\c\e\a\a\m\7\s\w\a\u\h\r\4\7\5\n\v\l\j\n\j\0\0\7\s\2\r\m\8\7\q\4\0\y\w\v\k\u\m\6\8\r\5\w\r\w\s\y\7\6\z\a\p\g\h\p\b\d\n\k\y\3\c\r\5\e\8\r\9\4\n\a\t\f\4\e\e\l\z\4\m\l\e\e\n\m\3\w\x\w\m\7\h\u\4\b\y\2\n\l\g\7\u\o\q\4\a\l\e\t\f\6\3\u\h\i\v\u\k\n\n\d\2\9\v\y\h\l\3\r\z\s\9\x\k\k\q\f\f\3\0\1\p\r\m\r\t\a\q\1\0\1\9\b\f\x\8\3\5\i\0\y\w\x\y\x\g\9\u\l\m\g\3\q\4\g\0\4\s\z\r\n\5\d\v\q\k\w\e\y\l\p\o\x\0\b\x\x\9\x\g\j\e\r\2\y\l\i\g\7\d\t\h\u\i\e\6\j\p\8\4\8\z\e\p\2\c\z\p\y\u\x\8\l\e\t\n\f\8\i\2\f\t\x\3\e\k\d\v\m\h\9\6\7\y\3\i\c\n\j\6\g\i\5\n\1\s\d\8\r\g\9\d\e\n\b\g\t\p\b\f\3\2\r\v\a\r\5\1\b\p\3\i\7\r\x\h\e\r\w\s\f\4\m\1\9\0\t\u\e\u\y\p\b\x\j\o\3\b\x\9\z\t\4\e\5\0\2\i\r\f\t\v\w\k\b\o\d\b\p\f\2\b\4\5\y\7\w\0\5\7\o\q\p\r\m\3\h\p\w\3\w\c\v\b\5\9\f\h\0\3\c\3\u\3\j\k\d\t\r\i\b\1\p\o\t\r\g\n\n\m\r\6\j\9\h\e\m\h\c\s\c\h\d\q\a\6\m\n\1\2\2\b\w\n\1\i\q\r\r\h\l\5\h\k\r\9\o\o\7\l\1\h\u\i\u\p\0\8\i\h\7\e\q\x\k\6\s\4\g\a\i\m\z\d\i\2\e\l\0\u\9\0\z\6\y\x\9\y\j\g\y\a\9\b\4\k\a\r\o\l\1\l\5\z\8\z\0\y\1\l\0\2\c\g\q\3\5\a\h\h\6\z\z\0\h\8\c\y\w\d\u\z\z\t\2\b\9\c\1\i\w\s\t\k\j\i\5\p\0\b\e\g\9\f\t\y\8\b\e\y\7\4\j\b\n\z\8\i\h\5\3\g\k\p\x\q\m\u\y\n\h\4\v\h\k\2\1\z\2\q\0\r\a\3\j\m\o\u\t\f\g\2\t\k\x\5\d\k\x\k\n\2\4\j\i\v\v\n\1\x\r\q\7\v\t\w\b\4\7\r\k\b\a\b\t\b\8\f\k\f\i\u\7\z\x\9\4\r\h\f\1\q\l\9\c\0\3\4\7\p\0\w\u\1\p\7\h\5\5\4\p\k\7\g\1\k\j\n\5\n\m\f\x\o\s\w\v\4\n\i\2\z\7\j\9\y\s\1\7\3\z\w\e\h\q\0\c\r\5\y\y\g\8\l\6\u\y\b\e\x\m\6\l\5\k\f\m\3\5\x\j\o\q\n\n\y\a\w\z\t\m\1\7\i\m\4\b\6\a\h\2\c\c\c\x\1\n\c\c\v\o\9\j\w\f\p\t\t\u\k\y\u\q\t\s\x\2\y\n\g\q\t\n\u\d\d\a\6\t\i\c\3\f\8\i\c\i\6\c\b\h\m\z\y\5\n\g\t\n\d\e\1\e\d\4\c\c\p\3\e\8\6\f\0\x\6\h\a\3\d\y\8\n\k\y\h\i\h\9\f\k\h\1\e\5\v\9\5\e\g\o\m\c\t\0\d\p\o\j\a\1\j\1\6\5\x\4\m\v\o\p\m\8\s\v\l\b\u\3\6\1\6\0\j\8\q\1\h\t\s\n\p\z\k\s\d\c\u\0\g ]] 00:08:18.630 22:39:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:18.631 22:39:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ ipia0aixt6y5jp1f9gt95ioc9qrkan5oh6awk66staxgrqih68nfxjjz3h27r1dpz4f7kxf9gsig364onf7xy5p62fzmq4kh1h9szhh3t8x4x7mhheh3mrsnm3f026da2a105z19dtohclzceaam7swauhr475nvljnj007s2rm87q40ywvkum68r5wrwsy76zapghpbdnky3cr5e8r94natf4eelz4mleenm3wxwm7hu4by2nlg7uoq4aletf63uhivuknnd29vyhl3rzs9xkkqff301prmrtaq1019bfx835i0ywxyxg9ulmg3q4g04szrn5dvqkweylpox0bxx9xgjer2ylig7dthuie6jp848zep2czpyux8letnf8i2ftx3ekdvmh967y3icnj6gi5n1sd8rg9denbgtpbf32rvar51bp3i7rxherwsf4m190tueuypbxjo3bx9zt4e502irftvwkbodbpf2b45y7w057oqprm3hpw3wcvb59fh03c3u3jkdtrib1potrgnnmr6j9hemhcschdqa6mn122bwn1iqrrhl5hkr9oo7l1huiup08ih7eqxk6s4gaimzdi2el0u90z6yx9yjgya9b4karol1l5z8z0y1l02cgq35ahh6zz0h8cywduzzt2b9c1iwstkji5p0beg9fty8bey74jbnz8ih53gkpxqmuynh4vhk21z2q0ra3jmoutfg2tkx5dkxkn24jivvn1xrq7vtwb47rkbabtb8fkfiu7zx94rhf1ql9c0347p0wu1p7h554pk7g1kjn5nmfxoswv4ni2z7j9ys173zwehq0cr5yyg8l6uybexm6l5kfm35xjoqnnyawztm17im4b6ah2cccx1nccvo9jwfpttukyuqtsx2yngqtnudda6tic3f8ici6cbhmzy5ngtnde1ed4ccp3e86f0x6ha3dy8nkyhih9fkh1e5v95egomct0dpoja1j165x4mvopm8svlbu36160j8q1htsnpzksdcu0g == 
\i\p\i\a\0\a\i\x\t\6\y\5\j\p\1\f\9\g\t\9\5\i\o\c\9\q\r\k\a\n\5\o\h\6\a\w\k\6\6\s\t\a\x\g\r\q\i\h\6\8\n\f\x\j\j\z\3\h\2\7\r\1\d\p\z\4\f\7\k\x\f\9\g\s\i\g\3\6\4\o\n\f\7\x\y\5\p\6\2\f\z\m\q\4\k\h\1\h\9\s\z\h\h\3\t\8\x\4\x\7\m\h\h\e\h\3\m\r\s\n\m\3\f\0\2\6\d\a\2\a\1\0\5\z\1\9\d\t\o\h\c\l\z\c\e\a\a\m\7\s\w\a\u\h\r\4\7\5\n\v\l\j\n\j\0\0\7\s\2\r\m\8\7\q\4\0\y\w\v\k\u\m\6\8\r\5\w\r\w\s\y\7\6\z\a\p\g\h\p\b\d\n\k\y\3\c\r\5\e\8\r\9\4\n\a\t\f\4\e\e\l\z\4\m\l\e\e\n\m\3\w\x\w\m\7\h\u\4\b\y\2\n\l\g\7\u\o\q\4\a\l\e\t\f\6\3\u\h\i\v\u\k\n\n\d\2\9\v\y\h\l\3\r\z\s\9\x\k\k\q\f\f\3\0\1\p\r\m\r\t\a\q\1\0\1\9\b\f\x\8\3\5\i\0\y\w\x\y\x\g\9\u\l\m\g\3\q\4\g\0\4\s\z\r\n\5\d\v\q\k\w\e\y\l\p\o\x\0\b\x\x\9\x\g\j\e\r\2\y\l\i\g\7\d\t\h\u\i\e\6\j\p\8\4\8\z\e\p\2\c\z\p\y\u\x\8\l\e\t\n\f\8\i\2\f\t\x\3\e\k\d\v\m\h\9\6\7\y\3\i\c\n\j\6\g\i\5\n\1\s\d\8\r\g\9\d\e\n\b\g\t\p\b\f\3\2\r\v\a\r\5\1\b\p\3\i\7\r\x\h\e\r\w\s\f\4\m\1\9\0\t\u\e\u\y\p\b\x\j\o\3\b\x\9\z\t\4\e\5\0\2\i\r\f\t\v\w\k\b\o\d\b\p\f\2\b\4\5\y\7\w\0\5\7\o\q\p\r\m\3\h\p\w\3\w\c\v\b\5\9\f\h\0\3\c\3\u\3\j\k\d\t\r\i\b\1\p\o\t\r\g\n\n\m\r\6\j\9\h\e\m\h\c\s\c\h\d\q\a\6\m\n\1\2\2\b\w\n\1\i\q\r\r\h\l\5\h\k\r\9\o\o\7\l\1\h\u\i\u\p\0\8\i\h\7\e\q\x\k\6\s\4\g\a\i\m\z\d\i\2\e\l\0\u\9\0\z\6\y\x\9\y\j\g\y\a\9\b\4\k\a\r\o\l\1\l\5\z\8\z\0\y\1\l\0\2\c\g\q\3\5\a\h\h\6\z\z\0\h\8\c\y\w\d\u\z\z\t\2\b\9\c\1\i\w\s\t\k\j\i\5\p\0\b\e\g\9\f\t\y\8\b\e\y\7\4\j\b\n\z\8\i\h\5\3\g\k\p\x\q\m\u\y\n\h\4\v\h\k\2\1\z\2\q\0\r\a\3\j\m\o\u\t\f\g\2\t\k\x\5\d\k\x\k\n\2\4\j\i\v\v\n\1\x\r\q\7\v\t\w\b\4\7\r\k\b\a\b\t\b\8\f\k\f\i\u\7\z\x\9\4\r\h\f\1\q\l\9\c\0\3\4\7\p\0\w\u\1\p\7\h\5\5\4\p\k\7\g\1\k\j\n\5\n\m\f\x\o\s\w\v\4\n\i\2\z\7\j\9\y\s\1\7\3\z\w\e\h\q\0\c\r\5\y\y\g\8\l\6\u\y\b\e\x\m\6\l\5\k\f\m\3\5\x\j\o\q\n\n\y\a\w\z\t\m\1\7\i\m\4\b\6\a\h\2\c\c\c\x\1\n\c\c\v\o\9\j\w\f\p\t\t\u\k\y\u\q\t\s\x\2\y\n\g\q\t\n\u\d\d\a\6\t\i\c\3\f\8\i\c\i\6\c\b\h\m\z\y\5\n\g\t\n\d\e\1\e\d\4\c\c\p\3\e\8\6\f\0\x\6\h\a\3\d\y\8\n\k\y\h\i\h\9\f\k\h\1\e\5\v\9\5\e\g\o\m\c\t\0\d\p\o\j\a\1\j\1\6\5\x\4\m\v\o\p\m\8\s\v\l\b\u\3\6\1\6\0\j\8\q\1\h\t\s\n\p\z\k\s\d\c\u\0\g ]] 00:08:18.631 22:39:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:18.890 22:39:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:18.890 22:39:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:18.890 22:39:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:18.890 22:39:33 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:18.890 [2024-12-07 22:39:33.531499] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:18.890 [2024-12-07 22:39:33.531586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73215 ] 00:08:18.890 { 00:08:18.890 "subsystems": [ 00:08:18.890 { 00:08:18.890 "subsystem": "bdev", 00:08:18.890 "config": [ 00:08:18.890 { 00:08:18.890 "params": { 00:08:18.890 "block_size": 512, 00:08:18.890 "num_blocks": 1048576, 00:08:18.890 "name": "malloc0" 00:08:18.890 }, 00:08:18.890 "method": "bdev_malloc_create" 00:08:18.890 }, 00:08:18.890 { 00:08:18.890 "params": { 00:08:18.890 "filename": "/dev/zram1", 00:08:18.890 "name": "uring0" 00:08:18.890 }, 00:08:18.890 "method": "bdev_uring_create" 00:08:18.890 }, 00:08:18.890 { 00:08:18.890 "method": "bdev_wait_for_examine" 00:08:18.890 } 00:08:18.890 ] 00:08:18.890 } 00:08:18.890 ] 00:08:18.890 } 00:08:19.150 [2024-12-07 22:39:33.664110] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.150 [2024-12-07 22:39:33.704385] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.150 [2024-12-07 22:39:33.737625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.529  [2024-12-07T22:39:36.231Z] Copying: 164/512 [MB] (164 MBps) [2024-12-07T22:39:37.248Z] Copying: 302/512 [MB] (138 MBps) [2024-12-07T22:39:37.508Z] Copying: 441/512 [MB] (138 MBps) [2024-12-07T22:39:37.766Z] Copying: 512/512 [MB] (average 145 MBps) 00:08:23.000 00:08:23.001 22:39:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:23.001 22:39:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:23.001 22:39:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:23.001 22:39:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:23.001 22:39:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:23.001 22:39:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:23.001 22:39:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:23.001 22:39:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:23.001 [2024-12-07 22:39:37.691639] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:23.001 [2024-12-07 22:39:37.691781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73271 ] 00:08:23.001 { 00:08:23.001 "subsystems": [ 00:08:23.001 { 00:08:23.001 "subsystem": "bdev", 00:08:23.001 "config": [ 00:08:23.001 { 00:08:23.001 "params": { 00:08:23.001 "block_size": 512, 00:08:23.001 "num_blocks": 1048576, 00:08:23.001 "name": "malloc0" 00:08:23.001 }, 00:08:23.001 "method": "bdev_malloc_create" 00:08:23.001 }, 00:08:23.001 { 00:08:23.001 "params": { 00:08:23.001 "filename": "/dev/zram1", 00:08:23.001 "name": "uring0" 00:08:23.001 }, 00:08:23.001 "method": "bdev_uring_create" 00:08:23.001 }, 00:08:23.001 { 00:08:23.001 "params": { 00:08:23.001 "name": "uring0" 00:08:23.001 }, 00:08:23.001 "method": "bdev_uring_delete" 00:08:23.001 }, 00:08:23.001 { 00:08:23.001 "method": "bdev_wait_for_examine" 00:08:23.001 } 00:08:23.001 ] 00:08:23.001 } 00:08:23.001 ] 00:08:23.001 } 00:08:23.260 [2024-12-07 22:39:37.831752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.260 [2024-12-07 22:39:37.877095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.260 [2024-12-07 22:39:37.912977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.519  [2024-12-07T22:39:38.544Z] Copying: 0/0 [B] (average 0 Bps) 00:08:23.778 00:08:23.778 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:23.778 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:23.778 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:23.778 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:08:23.778 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:23.778 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:23.778 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.778 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:23.778 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.778 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.778 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.778 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.778 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:23.778 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:23.778 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:23.778 22:39:38 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:23.778 [2024-12-07 22:39:38.351003] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:23.778 [2024-12-07 22:39:38.351102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73301 ] 00:08:23.778 { 00:08:23.778 "subsystems": [ 00:08:23.778 { 00:08:23.778 "subsystem": "bdev", 00:08:23.778 "config": [ 00:08:23.778 { 00:08:23.778 "params": { 00:08:23.778 "block_size": 512, 00:08:23.778 "num_blocks": 1048576, 00:08:23.778 "name": "malloc0" 00:08:23.778 }, 00:08:23.778 "method": "bdev_malloc_create" 00:08:23.778 }, 00:08:23.778 { 00:08:23.778 "params": { 00:08:23.778 "filename": "/dev/zram1", 00:08:23.778 "name": "uring0" 00:08:23.778 }, 00:08:23.778 "method": "bdev_uring_create" 00:08:23.778 }, 00:08:23.778 { 00:08:23.778 "params": { 00:08:23.778 "name": "uring0" 00:08:23.778 }, 00:08:23.778 "method": "bdev_uring_delete" 00:08:23.778 }, 00:08:23.778 { 00:08:23.778 "method": "bdev_wait_for_examine" 00:08:23.778 } 00:08:23.778 ] 00:08:23.778 } 00:08:23.778 ] 00:08:23.778 } 00:08:23.778 [2024-12-07 22:39:38.488271] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.778 [2024-12-07 22:39:38.534722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.038 [2024-12-07 22:39:38.572303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.038 [2024-12-07 22:39:38.700964] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:24.038 [2024-12-07 22:39:38.701042] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:24.038 [2024-12-07 22:39:38.701069] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:24.038 [2024-12-07 22:39:38.701079] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:24.296 [2024-12-07 22:39:38.887221] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:24.296 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:08:24.296 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:24.296 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:08:24.296 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:08:24.296 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:08:24.296 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:24.296 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:24.296 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:24.296 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:24.296 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:24.296 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:24.296 22:39:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:24.555 00:08:24.555 real 0m13.726s 00:08:24.555 user 0m9.440s 00:08:24.555 sys 0m11.598s 00:08:24.555 22:39:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.555 22:39:39 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:24.555 ************************************ 00:08:24.555 END TEST dd_uring_copy 00:08:24.555 ************************************ 00:08:24.814 00:08:24.814 real 0m14.006s 00:08:24.814 user 0m9.603s 00:08:24.814 sys 0m11.721s 00:08:24.814 22:39:39 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:24.814 22:39:39 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:24.814 ************************************ 00:08:24.814 END TEST spdk_dd_uring 00:08:24.814 ************************************ 00:08:24.814 22:39:39 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:24.814 22:39:39 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:24.814 22:39:39 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:24.814 22:39:39 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:24.814 ************************************ 00:08:24.814 START TEST spdk_dd_sparse 00:08:24.814 ************************************ 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:24.814 * Looking for test storage... 00:08:24.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lcov --version 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:24.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.814 --rc genhtml_branch_coverage=1 00:08:24.814 --rc genhtml_function_coverage=1 00:08:24.814 --rc genhtml_legend=1 00:08:24.814 --rc geninfo_all_blocks=1 00:08:24.814 --rc geninfo_unexecuted_blocks=1 00:08:24.814 00:08:24.814 ' 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:24.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.814 --rc genhtml_branch_coverage=1 00:08:24.814 --rc genhtml_function_coverage=1 00:08:24.814 --rc genhtml_legend=1 00:08:24.814 --rc geninfo_all_blocks=1 00:08:24.814 --rc geninfo_unexecuted_blocks=1 00:08:24.814 00:08:24.814 ' 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:24.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.814 --rc genhtml_branch_coverage=1 00:08:24.814 --rc genhtml_function_coverage=1 00:08:24.814 --rc genhtml_legend=1 00:08:24.814 --rc geninfo_all_blocks=1 00:08:24.814 --rc geninfo_unexecuted_blocks=1 00:08:24.814 00:08:24.814 ' 00:08:24.814 22:39:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:24.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.815 --rc genhtml_branch_coverage=1 00:08:24.815 --rc genhtml_function_coverage=1 00:08:24.815 --rc genhtml_legend=1 00:08:24.815 --rc geninfo_all_blocks=1 00:08:24.815 --rc geninfo_unexecuted_blocks=1 00:08:24.815 00:08:24.815 ' 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:24.815 22:39:39 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:24.815 22:39:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:25.074 1+0 records in 00:08:25.074 1+0 records out 00:08:25.074 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00542185 s, 774 MB/s 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:25.074 1+0 records in 00:08:25.074 1+0 records out 00:08:25.074 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00497455 s, 843 MB/s 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:25.074 1+0 records in 00:08:25.074 1+0 records out 00:08:25.074 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00585209 s, 717 MB/s 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:25.074 ************************************ 00:08:25.074 START TEST dd_sparse_file_to_file 00:08:25.074 ************************************ 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:25.074 22:39:39 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:25.074 [2024-12-07 22:39:39.675694] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
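The prepare step traced above builds the sparse source file by hand: truncate creates a 100 MiB backing file for the AIO bdev, then three dd writes place 4 MiB of data at output-block offsets 0, 4 and 8 (bs=4M), leaving two 4 MiB holes. A minimal standalone sketch of those same commands, with the resulting stat numbers worked out:

  truncate dd_sparse_aio_disk --size 104857600         # 100 MiB backing file for the dd_aio bdev
  dd if=/dev/zero of=file_zero1 bs=4M count=1          # data at [0 MiB, 4 MiB)
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4   # data at [16 MiB, 20 MiB)
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8   # data at [32 MiB, 36 MiB)
  # apparent size: 36 MiB = 37748736 bytes (stat %s)
  # allocated:     3 x 4 MiB = 12 MiB = 24576 512-byte blocks (stat %b)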
00:08:25.074 [2024-12-07 22:39:39.675973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73395 ] 00:08:25.074 { 00:08:25.074 "subsystems": [ 00:08:25.074 { 00:08:25.074 "subsystem": "bdev", 00:08:25.074 "config": [ 00:08:25.074 { 00:08:25.074 "params": { 00:08:25.074 "block_size": 4096, 00:08:25.074 "filename": "dd_sparse_aio_disk", 00:08:25.074 "name": "dd_aio" 00:08:25.074 }, 00:08:25.074 "method": "bdev_aio_create" 00:08:25.074 }, 00:08:25.074 { 00:08:25.074 "params": { 00:08:25.074 "lvs_name": "dd_lvstore", 00:08:25.074 "bdev_name": "dd_aio" 00:08:25.074 }, 00:08:25.074 "method": "bdev_lvol_create_lvstore" 00:08:25.074 }, 00:08:25.074 { 00:08:25.074 "method": "bdev_wait_for_examine" 00:08:25.074 } 00:08:25.074 ] 00:08:25.074 } 00:08:25.074 ] 00:08:25.074 } 00:08:25.074 [2024-12-07 22:39:39.813144] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.333 [2024-12-07 22:39:39.854005] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.333 [2024-12-07 22:39:39.886591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.333  [2024-12-07T22:39:40.357Z] Copying: 12/36 [MB] (average 923 MBps) 00:08:25.591 00:08:25.591 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:25.591 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:25.591 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:25.591 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:25.591 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:25.591 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:25.591 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:25.591 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:25.591 ************************************ 00:08:25.591 END TEST dd_sparse_file_to_file 00:08:25.591 ************************************ 00:08:25.591 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:25.591 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:25.591 00:08:25.591 real 0m0.537s 00:08:25.591 user 0m0.324s 00:08:25.591 sys 0m0.263s 00:08:25.591 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.591 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:25.591 22:39:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:25.591 22:39:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:25.592 22:39:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.592 22:39:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:25.592 ************************************ 00:08:25.592 START TEST dd_sparse_file_to_bdev 
00:08:25.592 ************************************ 00:08:25.592 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:08:25.592 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:25.592 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:25.592 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:25.592 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:25.592 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:25.592 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:25.592 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:25.592 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:25.592 [2024-12-07 22:39:40.261350] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:25.592 [2024-12-07 22:39:40.261446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73443 ] 00:08:25.592 { 00:08:25.592 "subsystems": [ 00:08:25.592 { 00:08:25.592 "subsystem": "bdev", 00:08:25.592 "config": [ 00:08:25.592 { 00:08:25.592 "params": { 00:08:25.592 "block_size": 4096, 00:08:25.592 "filename": "dd_sparse_aio_disk", 00:08:25.592 "name": "dd_aio" 00:08:25.592 }, 00:08:25.592 "method": "bdev_aio_create" 00:08:25.592 }, 00:08:25.592 { 00:08:25.592 "params": { 00:08:25.592 "lvs_name": "dd_lvstore", 00:08:25.592 "lvol_name": "dd_lvol", 00:08:25.592 "size_in_mib": 36, 00:08:25.592 "thin_provision": true 00:08:25.592 }, 00:08:25.592 "method": "bdev_lvol_create" 00:08:25.592 }, 00:08:25.592 { 00:08:25.592 "method": "bdev_wait_for_examine" 00:08:25.592 } 00:08:25.592 ] 00:08:25.592 } 00:08:25.592 ] 00:08:25.592 } 00:08:25.851 [2024-12-07 22:39:40.399763] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.851 [2024-12-07 22:39:40.441352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.851 [2024-12-07 22:39:40.475137] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:25.851  [2024-12-07T22:39:40.876Z] Copying: 12/36 [MB] (average 571 MBps) 00:08:26.110 00:08:26.110 ************************************ 00:08:26.110 END TEST dd_sparse_file_to_bdev 00:08:26.110 ************************************ 00:08:26.110 00:08:26.110 real 0m0.521s 00:08:26.110 user 0m0.332s 00:08:26.110 sys 0m0.248s 00:08:26.110 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.110 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:26.110 22:39:40 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:08:26.110 22:39:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:26.110 22:39:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.110 22:39:40 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:26.110 ************************************ 00:08:26.110 START TEST dd_sparse_bdev_to_file 00:08:26.110 ************************************ 00:08:26.110 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:08:26.110 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:26.110 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:26.110 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:26.110 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:26.110 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:26.110 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:26.110 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:26.110 22:39:40 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:26.110 [2024-12-07 22:39:40.836178] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:26.110 [2024-12-07 22:39:40.836449] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73470 ] 00:08:26.110 { 00:08:26.110 "subsystems": [ 00:08:26.110 { 00:08:26.110 "subsystem": "bdev", 00:08:26.110 "config": [ 00:08:26.110 { 00:08:26.110 "params": { 00:08:26.110 "block_size": 4096, 00:08:26.110 "filename": "dd_sparse_aio_disk", 00:08:26.110 "name": "dd_aio" 00:08:26.110 }, 00:08:26.110 "method": "bdev_aio_create" 00:08:26.110 }, 00:08:26.110 { 00:08:26.110 "method": "bdev_wait_for_examine" 00:08:26.110 } 00:08:26.110 ] 00:08:26.110 } 00:08:26.110 ] 00:08:26.110 } 00:08:26.369 [2024-12-07 22:39:40.971770] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.369 [2024-12-07 22:39:41.011233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.369 [2024-12-07 22:39:41.043572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.369  [2024-12-07T22:39:41.393Z] Copying: 12/36 [MB] (average 923 MBps) 00:08:26.627 00:08:26.627 22:39:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:26.627 22:39:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:26.627 22:39:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:26.627 22:39:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:26.627 22:39:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:26.627 22:39:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:26.627 22:39:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:26.627 22:39:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:26.627 ************************************ 00:08:26.627 END TEST dd_sparse_bdev_to_file 00:08:26.627 ************************************ 00:08:26.627 22:39:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:26.627 22:39:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:26.627 00:08:26.627 real 0m0.556s 00:08:26.627 user 0m0.319s 00:08:26.627 sys 0m0.301s 00:08:26.627 22:39:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.627 22:39:41 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:26.627 22:39:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:26.627 22:39:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:26.627 22:39:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:26.885 22:39:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:26.885 22:39:41 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:26.885 ************************************ 00:08:26.885 END TEST spdk_dd_sparse 00:08:26.885 ************************************ 00:08:26.885 00:08:26.885 real 0m2.036s 00:08:26.885 user 0m1.164s 00:08:26.885 sys 0m1.029s 00:08:26.885 22:39:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:26.885 22:39:41 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:26.885 22:39:41 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:26.885 22:39:41 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:26.885 22:39:41 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:26.885 22:39:41 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:26.885 ************************************ 00:08:26.885 START TEST spdk_dd_negative 00:08:26.885 ************************************ 00:08:26.885 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:26.885 * Looking for test storage... 
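Each sparse subtest drives spdk_dd with a bdev config piped in as JSON on --json /dev/fd/62 (an AIO bdev over the backing file, plus a dd_lvstore/dd_lvol when a bdev endpoint is needed), then decides pass or fail purely by stat arithmetic: apparent sizes (%s) must be identical and allocated block counts (%b) must be identical, which proves the copy reproduced the holes rather than zero-filling them. A condensed sketch of that check, using the values from the trace:

  stat1_s=$(stat --printf=%s file_zero1)   # 37748736: apparent size of the source
  stat2_s=$(stat --printf=%s file_zero2)   # 37748736: apparent size of the copy
  [[ "$stat1_s" == "$stat2_s" ]]
  stat1_b=$(stat --printf=%b file_zero1)   # 24576: 512-byte blocks actually allocated
  stat2_b=$(stat --printf=%b file_zero2)   # 24576: equal, so the holes survived the copy
  [[ "$stat1_b" == "$stat2_b" ]]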
00:08:26.886 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lcov --version 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:26.886 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:27.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.145 --rc genhtml_branch_coverage=1 00:08:27.145 --rc genhtml_function_coverage=1 00:08:27.145 --rc genhtml_legend=1 00:08:27.145 --rc geninfo_all_blocks=1 00:08:27.145 --rc geninfo_unexecuted_blocks=1 00:08:27.145 00:08:27.145 ' 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:27.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.145 --rc genhtml_branch_coverage=1 00:08:27.145 --rc genhtml_function_coverage=1 00:08:27.145 --rc genhtml_legend=1 00:08:27.145 --rc geninfo_all_blocks=1 00:08:27.145 --rc geninfo_unexecuted_blocks=1 00:08:27.145 00:08:27.145 ' 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:27.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.145 --rc genhtml_branch_coverage=1 00:08:27.145 --rc genhtml_function_coverage=1 00:08:27.145 --rc genhtml_legend=1 00:08:27.145 --rc geninfo_all_blocks=1 00:08:27.145 --rc geninfo_unexecuted_blocks=1 00:08:27.145 00:08:27.145 ' 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:27.145 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.145 --rc genhtml_branch_coverage=1 00:08:27.145 --rc genhtml_function_coverage=1 00:08:27.145 --rc genhtml_legend=1 00:08:27.145 --rc geninfo_all_blocks=1 00:08:27.145 --rc geninfo_unexecuted_blocks=1 00:08:27.145 00:08:27.145 ' 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:27.145 22:39:41 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.146 ************************************ 00:08:27.146 START TEST 
dd_invalid_arguments 00:08:27.146 ************************************ 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:27.146 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:27.146 00:08:27.146 CPU options: 00:08:27.146 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:27.146 (like [0,1,10]) 00:08:27.146 --lcores lcore to CPU mapping list. The list is in the format: 00:08:27.146 [<,lcores[@CPUs]>...] 00:08:27.146 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:27.146 Within the group, '-' is used for range separator, 00:08:27.146 ',' is used for single number separator. 00:08:27.146 '( )' can be omitted for single element group, 00:08:27.146 '@' can be omitted if cpus and lcores have the same value 00:08:27.146 --disable-cpumask-locks Disable CPU core lock files. 00:08:27.146 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:27.146 pollers in the app support interrupt mode) 00:08:27.146 -p, --main-core main (primary) core for DPDK 00:08:27.146 00:08:27.146 Configuration options: 00:08:27.146 -c, --config, --json JSON config file 00:08:27.146 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:27.146 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:27.146 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:27.146 --rpcs-allowed comma-separated list of permitted RPCS 00:08:27.146 --json-ignore-init-errors don't exit on invalid config entry 00:08:27.146 00:08:27.146 Memory options: 00:08:27.146 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:27.146 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:27.146 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:27.146 -R, --huge-unlink unlink huge files after initialization 00:08:27.146 -n, --mem-channels number of memory channels used for DPDK 00:08:27.146 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:27.146 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:27.146 --no-huge run without using hugepages 00:08:27.146 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:27.146 -i, --shm-id shared memory ID (optional) 00:08:27.146 -g, --single-file-segments force creating just one hugetlbfs file 00:08:27.146 00:08:27.146 PCI options: 00:08:27.146 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:27.146 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:27.146 -u, --no-pci disable PCI access 00:08:27.146 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:27.146 00:08:27.146 Log options: 00:08:27.146 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:27.146 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:27.146 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:27.146 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:27.146 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:08:27.146 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:08:27.146 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:08:27.146 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:08:27.146 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:08:27.146 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:08:27.146 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:27.146 --silence-noticelog disable notice level logging to stderr 00:08:27.146 00:08:27.146 Trace options: 00:08:27.146 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:27.146 setting 0 to disable trace (default 32768) 00:08:27.146 Tracepoints vary in size and can use more than one trace entry. 00:08:27.146 -e, --tpoint-group [:] 00:08:27.146 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:27.146 [2024-12-07 22:39:41.744574] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:27.146 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:27.146 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:08:27.146 bdev_raid, all). 00:08:27.146 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:27.146 a tracepoint group. First tpoint inside a group can be enabled by 00:08:27.146 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:27.146 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:27.146 in /include/spdk_internal/trace_defs.h 00:08:27.146 00:08:27.146 Other options: 00:08:27.146 -h, --help show this usage 00:08:27.146 -v, --version print SPDK version 00:08:27.146 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:27.146 --env-context Opaque context for use of the env implementation 00:08:27.146 00:08:27.146 Application specific: 00:08:27.146 [--------- DD Options ---------] 00:08:27.146 --if Input file. Must specify either --if or --ib. 00:08:27.146 --ib Input bdev. Must specifier either --if or --ib 00:08:27.146 --of Output file. Must specify either --of or --ob. 00:08:27.146 --ob Output bdev. Must specify either --of or --ob. 00:08:27.146 --iflag Input file flags. 00:08:27.146 --oflag Output file flags. 00:08:27.146 --bs I/O unit size (default: 4096) 00:08:27.146 --qd Queue depth (default: 2) 00:08:27.146 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:27.146 --skip Skip this many I/O units at start of input. (default: 0) 00:08:27.146 --seek Skip this many I/O units at start of output. (default: 0) 00:08:27.146 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:27.146 --sparse Enable hole skipping in input target 00:08:27.146 Available iflag and oflag values: 00:08:27.146 append - append mode 00:08:27.146 direct - use direct I/O for data 00:08:27.146 directory - fail unless a directory 00:08:27.146 dsync - use synchronized I/O for data 00:08:27.146 noatime - do not update access time 00:08:27.146 noctty - do not assign controlling terminal from file 00:08:27.146 nofollow - do not follow symlinks 00:08:27.146 nonblock - use non-blocking I/O 00:08:27.146 sync - use synchronized I/O for data and metadata 00:08:27.146 ************************************ 00:08:27.146 END TEST dd_invalid_arguments 00:08:27.146 ************************************ 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.146 00:08:27.146 real 0m0.080s 00:08:27.146 user 0m0.049s 00:08:27.146 sys 0m0.028s 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.146 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.146 ************************************ 00:08:27.146 START TEST dd_double_input 00:08:27.146 ************************************ 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:27.147 [2024-12-07 22:39:41.879148] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
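Every negative case here follows the same shape: the spdk_dd invocation is wrapped in the harness's NOT helper, which succeeds only when the wrapped command fails. A reduced sketch of that inversion, simplified from the es= handling visible in the trace (the real helper in common/autotest_common.sh carries extra bookkeeping):

  NOT() {
      local es=0
      "$@" || es=$?   # capture the exit status instead of aborting the test
      # statuses above 128 get remapped before judging (seen later in the
      # trace as es=244 -> es=116); omitted here for brevity
      (( es != 0 ))   # invert: a non-zero exit is exactly what a negative test wants
  }
  NOT build/bin/spdk_dd --if=test/dd/dd.dump0 --ib= --ob=   # must fail: input given twice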
00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.147 00:08:27.147 real 0m0.078s 00:08:27.147 user 0m0.043s 00:08:27.147 sys 0m0.033s 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.147 ************************************ 00:08:27.147 END TEST dd_double_input 00:08:27.147 ************************************ 00:08:27.147 22:39:41 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:27.406 22:39:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:27.406 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.406 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.406 22:39:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.406 ************************************ 00:08:27.406 START TEST dd_double_output 00:08:27.406 ************************************ 00:08:27.406 22:39:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:08:27.406 22:39:41 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:27.406 22:39:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:08:27.406 22:39:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:27.406 22:39:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.406 22:39:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.406 22:39:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.406 22:39:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.406 22:39:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.406 22:39:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.406 22:39:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.406 22:39:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.406 22:39:41 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:27.406 [2024-12-07 22:39:42.000882] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.406 00:08:27.406 real 0m0.066s 00:08:27.406 user 0m0.038s 00:08:27.406 sys 0m0.027s 00:08:27.406 ************************************ 00:08:27.406 END TEST dd_double_output 00:08:27.406 ************************************ 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.406 ************************************ 00:08:27.406 START TEST dd_no_input 00:08:27.406 ************************************ 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:27.406 [2024-12-07 22:39:42.122628] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:27.406 ************************************ 00:08:27.406 END TEST dd_no_input 00:08:27.406 ************************************ 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.406 00:08:27.406 real 0m0.070s 00:08:27.406 user 0m0.042s 00:08:27.406 sys 0m0.027s 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.406 22:39:42 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:27.665 22:39:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:27.665 22:39:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.665 22:39:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.665 22:39:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.665 ************************************ 00:08:27.665 START TEST dd_no_output 00:08:27.665 ************************************ 00:08:27.665 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:08:27.665 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.665 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:08:27.665 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.665 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.665 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.665 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.665 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.665 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.665 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.665 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.665 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.665 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:27.665 [2024-12-07 22:39:42.245538] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:27.666 22:39:42 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.666 00:08:27.666 real 0m0.078s 00:08:27.666 user 0m0.049s 00:08:27.666 sys 0m0.028s 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.666 ************************************ 00:08:27.666 END TEST dd_no_output 00:08:27.666 ************************************ 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.666 ************************************ 00:08:27.666 START TEST dd_wrong_blocksize 00:08:27.666 ************************************ 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:27.666 [2024-12-07 22:39:42.371794] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.666 00:08:27.666 real 0m0.074s 00:08:27.666 user 0m0.049s 00:08:27.666 sys 0m0.024s 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:27.666 ************************************ 00:08:27.666 END TEST dd_wrong_blocksize 00:08:27.666 ************************************ 00:08:27.666 22:39:42 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:27.926 22:39:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:27.926 22:39:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:27.926 22:39:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:27.926 22:39:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:27.926 ************************************ 00:08:27.926 START TEST dd_smaller_blocksize 00:08:27.926 ************************************ 00:08:27.926 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:08:27.926 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:27.926 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:27.926 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:27.926 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.926 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.926 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.926 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.926 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.926 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.926 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:27.926 
22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:27.926 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:27.926 [2024-12-07 22:39:42.499237] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:27.926 [2024-12-07 22:39:42.499343] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73702 ] 00:08:27.926 [2024-12-07 22:39:42.638670] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.926 [2024-12-07 22:39:42.677022] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.185 [2024-12-07 22:39:42.709837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.185 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:28.185 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:28.185 [2024-12-07 22:39:42.727870] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:28.185 [2024-12-07 22:39:42.727896] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.185 [2024-12-07 22:39:42.795547] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.185 00:08:28.185 real 0m0.450s 00:08:28.185 user 0m0.233s 00:08:28.185 sys 0m0.112s 00:08:28.185 ************************************ 00:08:28.185 END TEST dd_smaller_blocksize 00:08:28.185 ************************************ 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:28.185 ************************************ 00:08:28.185 START TEST dd_invalid_count 00:08:28.185 ************************************ 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.185 22:39:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.445 22:39:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.445 22:39:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.445 22:39:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.445 22:39:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.445 22:39:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.445 22:39:42 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:28.445 [2024-12-07 22:39:43.004193] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.445 00:08:28.445 real 0m0.077s 00:08:28.445 user 0m0.046s 00:08:28.445 sys 0m0.030s 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.445 ************************************ 00:08:28.445 END TEST dd_invalid_count 00:08:28.445 ************************************ 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:28.445 ************************************ 
00:08:28.445 START TEST dd_invalid_oflag 00:08:28.445 ************************************ 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:28.445 [2024-12-07 22:39:43.124955] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.445 00:08:28.445 real 0m0.074s 00:08:28.445 user 0m0.045s 00:08:28.445 sys 0m0.029s 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:28.445 ************************************ 00:08:28.445 END TEST dd_invalid_oflag 00:08:28.445 ************************************ 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.445 22:39:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:28.445 ************************************ 00:08:28.445 START TEST dd_invalid_iflag 00:08:28.446 
************************************ 00:08:28.446 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:08:28.446 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:28.446 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:08:28.446 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:28.446 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.446 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.446 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.446 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.446 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.446 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.446 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.446 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.446 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:28.726 [2024-12-07 22:39:43.248787] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.726 00:08:28.726 real 0m0.078s 00:08:28.726 user 0m0.045s 00:08:28.726 sys 0m0.031s 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:28.726 ************************************ 00:08:28.726 END TEST dd_invalid_iflag 00:08:28.726 ************************************ 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:28.726 ************************************ 00:08:28.726 START TEST dd_unknown_flag 00:08:28.726 ************************************ 00:08:28.726 
22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.726 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:28.726 [2024-12-07 22:39:43.382525] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:28.726 [2024-12-07 22:39:43.382644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73794 ] 00:08:28.985 [2024-12-07 22:39:43.523464] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.985 [2024-12-07 22:39:43.563004] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.985 [2024-12-07 22:39:43.595589] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:28.985 [2024-12-07 22:39:43.613313] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:28.985 [2024-12-07 22:39:43.613380] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.985 [2024-12-07 22:39:43.613445] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:28.985 [2024-12-07 22:39:43.613457] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.985 [2024-12-07 22:39:43.613759] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:28.985 [2024-12-07 22:39:43.613776] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.985 [2024-12-07 22:39:43.613820] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:28.985 [2024-12-07 22:39:43.613836] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:28.985 [2024-12-07 22:39:43.680050] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:29.243 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:08:29.243 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:29.243 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:08:29.243 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:08:29.243 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:08:29.243 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:29.243 00:08:29.243 real 0m0.436s 00:08:29.243 user 0m0.224s 00:08:29.243 sys 0m0.118s 00:08:29.243 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.243 22:39:43 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:29.243 ************************************ 00:08:29.243 END TEST dd_unknown_flag 00:08:29.243 ************************************ 00:08:29.243 22:39:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:29.243 22:39:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.243 22:39:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.243 22:39:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:29.243 ************************************ 00:08:29.243 START TEST dd_invalid_json 00:08:29.243 ************************************ 00:08:29.243 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:08:29.243 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:29.243 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:08:29.244 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:29.244 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.244 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:29.244 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.244 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.244 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.244 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.244 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.244 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.244 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:29.244 22:39:43 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:29.244 [2024-12-07 22:39:43.874079] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:29.244 [2024-12-07 22:39:43.874172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73817 ] 00:08:29.502 [2024-12-07 22:39:44.015992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.502 [2024-12-07 22:39:44.055971] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.502 [2024-12-07 22:39:44.056038] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:29.502 [2024-12-07 22:39:44.056052] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:29.502 [2024-12-07 22:39:44.056061] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:29.502 [2024-12-07 22:39:44.056109] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:29.502 00:08:29.502 real 0m0.329s 00:08:29.502 user 0m0.157s 00:08:29.502 sys 0m0.070s 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:29.502 ************************************ 00:08:29.502 END TEST dd_invalid_json 00:08:29.502 ************************************ 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:29.502 ************************************ 00:08:29.502 START TEST dd_invalid_seek 00:08:29.502 ************************************ 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:29.502 
22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:29.502 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:29.502 [2024-12-07 22:39:44.259144] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:29.502 [2024-12-07 22:39:44.259237] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73852 ] 00:08:29.502 { 00:08:29.502 "subsystems": [ 00:08:29.502 { 00:08:29.502 "subsystem": "bdev", 00:08:29.502 "config": [ 00:08:29.502 { 00:08:29.502 "params": { 00:08:29.502 "block_size": 512, 00:08:29.502 "num_blocks": 512, 00:08:29.502 "name": "malloc0" 00:08:29.502 }, 00:08:29.502 "method": "bdev_malloc_create" 00:08:29.502 }, 00:08:29.502 { 00:08:29.502 "params": { 00:08:29.502 "block_size": 512, 00:08:29.502 "num_blocks": 512, 00:08:29.502 "name": "malloc1" 00:08:29.502 }, 00:08:29.502 "method": "bdev_malloc_create" 00:08:29.502 }, 00:08:29.502 { 00:08:29.502 "method": "bdev_wait_for_examine" 00:08:29.502 } 00:08:29.502 ] 00:08:29.502 } 00:08:29.502 ] 00:08:29.502 } 00:08:29.760 [2024-12-07 22:39:44.396578] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.760 [2024-12-07 22:39:44.435325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.760 [2024-12-07 22:39:44.469033] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.760 [2024-12-07 22:39:44.512760] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:29.760 [2024-12-07 22:39:44.512831] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:30.018 [2024-12-07 22:39:44.580085] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:30.019 00:08:30.019 real 0m0.460s 00:08:30.019 user 0m0.302s 00:08:30.019 sys 0m0.120s 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:30.019 ************************************ 00:08:30.019 END TEST dd_invalid_seek 00:08:30.019 ************************************ 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:30.019 ************************************ 00:08:30.019 START TEST dd_invalid_skip 00:08:30.019 ************************************ 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:30.019 22:39:44 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:30.019 [2024-12-07 22:39:44.767453] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:30.019 [2024-12-07 22:39:44.767554] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73880 ] 00:08:30.019 { 00:08:30.019 "subsystems": [ 00:08:30.019 { 00:08:30.019 "subsystem": "bdev", 00:08:30.019 "config": [ 00:08:30.019 { 00:08:30.019 "params": { 00:08:30.019 "block_size": 512, 00:08:30.019 "num_blocks": 512, 00:08:30.019 "name": "malloc0" 00:08:30.019 }, 00:08:30.019 "method": "bdev_malloc_create" 00:08:30.019 }, 00:08:30.019 { 00:08:30.019 "params": { 00:08:30.019 "block_size": 512, 00:08:30.019 "num_blocks": 512, 00:08:30.019 "name": "malloc1" 00:08:30.019 }, 00:08:30.019 "method": "bdev_malloc_create" 00:08:30.019 }, 00:08:30.019 { 00:08:30.019 "method": "bdev_wait_for_examine" 00:08:30.019 } 00:08:30.019 ] 00:08:30.019 } 00:08:30.019 ] 00:08:30.019 } 00:08:30.278 [2024-12-07 22:39:44.908836] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.279 [2024-12-07 22:39:44.955092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.279 [2024-12-07 22:39:44.990328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.279 [2024-12-07 22:39:45.035169] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:30.279 [2024-12-07 22:39:45.035238] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:30.536 [2024-12-07 22:39:45.106639] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:30.536 00:08:30.536 real 0m0.478s 00:08:30.536 user 0m0.308s 00:08:30.536 sys 0m0.127s 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.536 ************************************ 00:08:30.536 END TEST dd_invalid_skip 00:08:30.536 ************************************ 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 ************************************ 00:08:30.536 START TEST dd_invalid_input_count 00:08:30.536 ************************************ 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:08:30.536 22:39:45 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.536 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:30.537 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:30.537 { 00:08:30.537 "subsystems": [ 00:08:30.537 { 00:08:30.537 "subsystem": "bdev", 00:08:30.537 "config": [ 00:08:30.537 { 00:08:30.537 "params": { 00:08:30.537 "block_size": 512, 00:08:30.537 "num_blocks": 512, 00:08:30.537 "name": "malloc0" 00:08:30.537 }, 
00:08:30.537 "method": "bdev_malloc_create" 00:08:30.537 }, 00:08:30.537 { 00:08:30.537 "params": { 00:08:30.537 "block_size": 512, 00:08:30.537 "num_blocks": 512, 00:08:30.537 "name": "malloc1" 00:08:30.537 }, 00:08:30.537 "method": "bdev_malloc_create" 00:08:30.537 }, 00:08:30.537 { 00:08:30.537 "method": "bdev_wait_for_examine" 00:08:30.537 } 00:08:30.537 ] 00:08:30.537 } 00:08:30.537 ] 00:08:30.537 } 00:08:30.537 [2024-12-07 22:39:45.299952] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:30.794 [2024-12-07 22:39:45.300083] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73919 ] 00:08:30.794 [2024-12-07 22:39:45.443080] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.794 [2024-12-07 22:39:45.484716] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.794 [2024-12-07 22:39:45.520304] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.052 [2024-12-07 22:39:45.565402] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:31.052 [2024-12-07 22:39:45.565515] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.052 [2024-12-07 22:39:45.639265] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:31.052 00:08:31.052 real 0m0.482s 00:08:31.052 user 0m0.318s 00:08:31.052 sys 0m0.126s 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.052 ************************************ 00:08:31.052 END TEST dd_invalid_input_count 00:08:31.052 ************************************ 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.052 ************************************ 00:08:31.052 START TEST dd_invalid_output_count 00:08:31.052 ************************************ 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # invalid_output_count 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 
mbdev0_b=512 mbdev0_bs=512 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.052 22:39:45 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:31.311 [2024-12-07 22:39:45.830384] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:31.311 [2024-12-07 22:39:45.830488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73947 ] 00:08:31.311 { 00:08:31.311 "subsystems": [ 00:08:31.311 { 00:08:31.311 "subsystem": "bdev", 00:08:31.311 "config": [ 00:08:31.311 { 00:08:31.311 "params": { 00:08:31.311 "block_size": 512, 00:08:31.311 "num_blocks": 512, 00:08:31.311 "name": "malloc0" 00:08:31.311 }, 00:08:31.311 "method": "bdev_malloc_create" 00:08:31.311 }, 00:08:31.311 { 00:08:31.311 "method": "bdev_wait_for_examine" 00:08:31.311 } 00:08:31.311 ] 00:08:31.311 } 00:08:31.311 ] 00:08:31.311 } 00:08:31.311 [2024-12-07 22:39:45.968619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.311 [2024-12-07 22:39:46.009949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.311 [2024-12-07 22:39:46.045285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.570 [2024-12-07 22:39:46.082517] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:31.570 [2024-12-07 22:39:46.082606] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.570 [2024-12-07 22:39:46.156034] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:31.570 00:08:31.570 real 0m0.472s 00:08:31.570 user 0m0.295s 00:08:31.570 sys 0m0.136s 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:31.570 ************************************ 00:08:31.570 END TEST dd_invalid_output_count 00:08:31.570 ************************************ 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:31.570 ************************************ 00:08:31.570 START TEST dd_bs_not_multiple 00:08:31.570 ************************************ 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:31.570 22:39:46 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:31.570 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:31.571 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.571 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:31.571 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.571 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:31.571 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.571 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.571 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:31.834 [2024-12-07 22:39:46.342661] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:31.834 [2024-12-07 22:39:46.343272] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73984 ] 00:08:31.834 { 00:08:31.835 "subsystems": [ 00:08:31.835 { 00:08:31.835 "subsystem": "bdev", 00:08:31.835 "config": [ 00:08:31.835 { 00:08:31.835 "params": { 00:08:31.835 "block_size": 512, 00:08:31.835 "num_blocks": 512, 00:08:31.835 "name": "malloc0" 00:08:31.835 }, 00:08:31.835 "method": "bdev_malloc_create" 00:08:31.835 }, 00:08:31.835 { 00:08:31.835 "params": { 00:08:31.835 "block_size": 512, 00:08:31.835 "num_blocks": 512, 00:08:31.835 "name": "malloc1" 00:08:31.835 }, 00:08:31.835 "method": "bdev_malloc_create" 00:08:31.835 }, 00:08:31.835 { 00:08:31.835 "method": "bdev_wait_for_examine" 00:08:31.835 } 00:08:31.835 ] 00:08:31.835 } 00:08:31.835 ] 00:08:31.835 } 00:08:31.835 [2024-12-07 22:39:46.472743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.835 [2024-12-07 22:39:46.507005] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.835 [2024-12-07 22:39:46.538949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.835 [2024-12-07 22:39:46.582194] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:31.835 [2024-12-07 22:39:46.582280] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.093 [2024-12-07 22:39:46.644356] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:32.093 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:08:32.093 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:32.093 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:08:32.093 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:08:32.093 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:08:32.093 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:32.093 00:08:32.093 real 0m0.423s 00:08:32.093 user 0m0.267s 00:08:32.093 sys 0m0.120s 00:08:32.093 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.093 ************************************ 00:08:32.093 END TEST dd_bs_not_multiple 00:08:32.093 22:39:46 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:32.093 ************************************ 00:08:32.093 00:08:32.093 real 0m5.293s 00:08:32.093 user 0m2.915s 00:08:32.093 sys 0m1.804s 00:08:32.093 22:39:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.093 22:39:46 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.093 ************************************ 00:08:32.093 END TEST spdk_dd_negative 00:08:32.093 ************************************ 00:08:32.093 00:08:32.093 real 1m4.240s 00:08:32.093 user 0m40.742s 00:08:32.093 sys 0m27.426s 00:08:32.093 22:39:46 spdk_dd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:32.093 22:39:46 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:32.093 
************************************ 00:08:32.093 END TEST spdk_dd 00:08:32.093 ************************************ 00:08:32.093 22:39:46 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:32.093 22:39:46 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:32.093 22:39:46 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:32.093 22:39:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:32.094 22:39:46 -- common/autotest_common.sh@10 -- # set +x 00:08:32.352 22:39:46 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:32.352 22:39:46 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:32.352 22:39:46 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:32.352 22:39:46 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:08:32.352 22:39:46 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:08:32.352 22:39:46 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:08:32.352 22:39:46 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:32.352 22:39:46 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:32.352 22:39:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.352 22:39:46 -- common/autotest_common.sh@10 -- # set +x 00:08:32.352 ************************************ 00:08:32.352 START TEST nvmf_tcp 00:08:32.353 ************************************ 00:08:32.353 22:39:46 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:32.353 * Looking for test storage... 00:08:32.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:32.353 22:39:46 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:32.353 22:39:46 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:08:32.353 22:39:46 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:32.353 22:39:47 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.353 22:39:47 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:32.353 22:39:47 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.353 22:39:47 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:32.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.353 --rc genhtml_branch_coverage=1 00:08:32.353 --rc genhtml_function_coverage=1 00:08:32.353 --rc genhtml_legend=1 00:08:32.353 --rc geninfo_all_blocks=1 00:08:32.353 --rc geninfo_unexecuted_blocks=1 00:08:32.353 00:08:32.353 ' 00:08:32.353 22:39:47 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:32.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.353 --rc genhtml_branch_coverage=1 00:08:32.353 --rc genhtml_function_coverage=1 00:08:32.353 --rc genhtml_legend=1 00:08:32.353 --rc geninfo_all_blocks=1 00:08:32.353 --rc geninfo_unexecuted_blocks=1 00:08:32.353 00:08:32.353 ' 00:08:32.353 22:39:47 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:32.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.353 --rc genhtml_branch_coverage=1 00:08:32.353 --rc genhtml_function_coverage=1 00:08:32.353 --rc genhtml_legend=1 00:08:32.353 --rc geninfo_all_blocks=1 00:08:32.353 --rc geninfo_unexecuted_blocks=1 00:08:32.353 00:08:32.353 ' 00:08:32.353 22:39:47 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:32.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.353 --rc genhtml_branch_coverage=1 00:08:32.353 --rc genhtml_function_coverage=1 00:08:32.353 --rc genhtml_legend=1 00:08:32.353 --rc geninfo_all_blocks=1 00:08:32.353 --rc geninfo_unexecuted_blocks=1 00:08:32.353 00:08:32.353 ' 00:08:32.353 22:39:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:32.353 22:39:47 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:32.353 22:39:47 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:32.353 22:39:47 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:32.353 22:39:47 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.353 22:39:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:32.353 ************************************ 00:08:32.353 START TEST nvmf_target_core 00:08:32.353 ************************************ 00:08:32.353 22:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:32.611 * Looking for test storage... 00:08:32.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:32.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.611 --rc genhtml_branch_coverage=1 00:08:32.611 --rc genhtml_function_coverage=1 00:08:32.611 --rc genhtml_legend=1 00:08:32.611 --rc geninfo_all_blocks=1 00:08:32.611 --rc geninfo_unexecuted_blocks=1 00:08:32.611 00:08:32.611 ' 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:32.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.611 --rc genhtml_branch_coverage=1 00:08:32.611 --rc genhtml_function_coverage=1 00:08:32.611 --rc genhtml_legend=1 00:08:32.611 --rc geninfo_all_blocks=1 00:08:32.611 --rc geninfo_unexecuted_blocks=1 00:08:32.611 00:08:32.611 ' 00:08:32.611 22:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:32.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.611 --rc genhtml_branch_coverage=1 00:08:32.612 --rc genhtml_function_coverage=1 00:08:32.612 --rc genhtml_legend=1 00:08:32.612 --rc geninfo_all_blocks=1 00:08:32.612 --rc geninfo_unexecuted_blocks=1 00:08:32.612 00:08:32.612 ' 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:32.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.612 --rc genhtml_branch_coverage=1 00:08:32.612 --rc genhtml_function_coverage=1 00:08:32.612 --rc genhtml_legend=1 00:08:32.612 --rc geninfo_all_blocks=1 00:08:32.612 --rc geninfo_unexecuted_blocks=1 00:08:32.612 00:08:32.612 ' 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:32.612 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:32.612 ************************************ 00:08:32.612 START TEST nvmf_host_management 00:08:32.612 ************************************ 00:08:32.612 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:32.612 * Looking for test storage... 
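The "[: : integer expression expected" complaint above decodes as follows: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']', and an empty string is not a valid operand for the arithmetic -eq test, so the test command prints the warning and simply returns false. A tiny sketch of the failure mode, with an illustrative guard (not the script's actual fix):

    flag=''
    [ "$flag" -eq 1 ]        # -> "[: : integer expression expected", status 2
    [ "${flag:-0}" -eq 1 ]   # defaulted expansion: quiet, plain false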
00:08:32.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:32.871 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:32.871 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:32.871 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:08:32.871 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:32.871 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.871 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.871 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.871 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.871 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.871 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.871 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.871 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:32.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.872 --rc genhtml_branch_coverage=1 00:08:32.872 --rc genhtml_function_coverage=1 00:08:32.872 --rc genhtml_legend=1 00:08:32.872 --rc geninfo_all_blocks=1 00:08:32.872 --rc geninfo_unexecuted_blocks=1 00:08:32.872 00:08:32.872 ' 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:32.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.872 --rc genhtml_branch_coverage=1 00:08:32.872 --rc genhtml_function_coverage=1 00:08:32.872 --rc genhtml_legend=1 00:08:32.872 --rc geninfo_all_blocks=1 00:08:32.872 --rc geninfo_unexecuted_blocks=1 00:08:32.872 00:08:32.872 ' 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:32.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.872 --rc genhtml_branch_coverage=1 00:08:32.872 --rc genhtml_function_coverage=1 00:08:32.872 --rc genhtml_legend=1 00:08:32.872 --rc geninfo_all_blocks=1 00:08:32.872 --rc geninfo_unexecuted_blocks=1 00:08:32.872 00:08:32.872 ' 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:32.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.872 --rc genhtml_branch_coverage=1 00:08:32.872 --rc genhtml_function_coverage=1 00:08:32.872 --rc genhtml_legend=1 00:08:32.872 --rc geninfo_all_blocks=1 00:08:32.872 --rc geninfo_unexecuted_blocks=1 00:08:32.872 00:08:32.872 ' 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
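The cmp_versions walk traced three times above (once per nested test scope) decides whether the installed lcov predates 2.x, which selects the legacy --rc lcov_* option spelling. Condensed to its core, as an equivalent sketch rather than scripts/common.sh verbatim:

    cmp_versions() {   # usage: cmp_versions 1.15 '<' 2
        local IFS=.-: op=$2 v a b
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && { [[ $op == '>' ]]; return; }   # fields differ: settle here
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]   # all fields equal
    }
    cmp_versions 1.15 '<' 2 && echo "lcov 1.15 is pre-2.x: keep the legacy --rc options"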
00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:32.872 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:32.872 22:39:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.872 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:32.873 Cannot find device "nvmf_init_br" 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:32.873 Cannot find device "nvmf_init_br2" 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:32.873 Cannot find device "nvmf_tgt_br" 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:32.873 Cannot find device "nvmf_tgt_br2" 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:32.873 Cannot find device "nvmf_init_br" 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:32.873 Cannot find device "nvmf_init_br2" 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:32.873 Cannot find device "nvmf_tgt_br" 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:32.873 Cannot find device "nvmf_tgt_br2" 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:32.873 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:33.132 Cannot find device "nvmf_br" 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:33.132 Cannot find device "nvmf_init_if" 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:33.132 Cannot find device "nvmf_init_if2" 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:33.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:33.132 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:33.132 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:33.390 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:33.390 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:33.390 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:33.390 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:33.390 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:33.390 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:33.390 22:39:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:33.390 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:33.390 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:33.390 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:33.390 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:33.390 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:33.390 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:33.390 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.187 ms 00:08:33.390 00:08:33.390 --- 10.0.0.3 ping statistics --- 00:08:33.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.390 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:33.391 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:33.391 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:08:33.391 00:08:33.391 --- 10.0.0.4 ping statistics --- 00:08:33.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.391 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:33.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:33.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:08:33.391 00:08:33.391 --- 10.0.0.1 ping statistics --- 00:08:33.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.391 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:33.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:33.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:08:33.391 00:08:33.391 --- 10.0.0.2 ping statistics --- 00:08:33.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:33.391 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # return 0 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=74314 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 74314 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 74314 ']' 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
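Stepping back, the nvmf_veth_init sequence that just completed built a bridged two-namespace topology: initiator interfaces stay in the root namespace, target interfaces move into nvmf_tgt_ns_spdk, and the veth peers meet on the nvmf_br bridge, which the four pings then verify. Condensed here to the first initiator/target pair (the trace also builds the nvmf_init_if2/nvmf_tgt_if2 mirrors), with commands as they appear above:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # root ns -> target ns, the first of the checks above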
00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.391 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.391 [2024-12-07 22:39:48.132104] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:33.391 [2024-12-07 22:39:48.132207] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.649 [2024-12-07 22:39:48.271653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.649 [2024-12-07 22:39:48.320593] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.649 [2024-12-07 22:39:48.320651] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.649 [2024-12-07 22:39:48.320666] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:33.649 [2024-12-07 22:39:48.320675] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:33.649 [2024-12-07 22:39:48.320684] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.649 [2024-12-07 22:39:48.320898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.649 [2024-12-07 22:39:48.321264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.649 [2024-12-07 22:39:48.321454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:33.649 [2024-12-07 22:39:48.321466] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.649 [2024-12-07 22:39:48.359793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.908 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.908 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:33.908 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:33.908 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:33.908 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.909 [2024-12-07 22:39:48.466541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:33.909 22:39:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.909 Malloc0 00:08:33.909 [2024-12-07 22:39:48.528473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=74366 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 74366 /var/tmp/bdevperf.sock 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 74366 ']' 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:33.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
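Between the transport creation just traced and the bdevperf launch below, starttarget feeds rpcs.txt to the target to stand up the subsystem that Nvme0 will attach to. The batch itself is never echoed, so the following is a reconstruction from the Malloc0 and listener notices plus the NQN in the generated JSON; the RPC names are standard SPDK ones, but the exact flags used by host_management.sh may differ:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192     # traced explicitly above
    $rpc bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420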
00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:33.909 { 00:08:33.909 "params": { 00:08:33.909 "name": "Nvme$subsystem", 00:08:33.909 "trtype": "$TEST_TRANSPORT", 00:08:33.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:33.909 "adrfam": "ipv4", 00:08:33.909 "trsvcid": "$NVMF_PORT", 00:08:33.909 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:33.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:33.909 "hdgst": ${hdgst:-false}, 00:08:33.909 "ddgst": ${ddgst:-false} 00:08:33.909 }, 00:08:33.909 "method": "bdev_nvme_attach_controller" 00:08:33.909 } 00:08:33.909 EOF 00:08:33.909 )") 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:33.909 22:39:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:33.909 "params": { 00:08:33.909 "name": "Nvme0", 00:08:33.909 "trtype": "tcp", 00:08:33.909 "traddr": "10.0.0.3", 00:08:33.909 "adrfam": "ipv4", 00:08:33.909 "trsvcid": "4420", 00:08:33.909 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:33.909 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:33.909 "hdgst": false, 00:08:33.909 "ddgst": false 00:08:33.909 }, 00:08:33.909 "method": "bdev_nvme_attach_controller" 00:08:33.909 }' 00:08:33.909 [2024-12-07 22:39:48.629696] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:08:33.909 [2024-12-07 22:39:48.629804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74366 ] 00:08:34.168 [2024-12-07 22:39:48.766238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.168 [2024-12-07 22:39:48.808467] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.168 [2024-12-07 22:39:48.851271] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.442 Running I/O for 10 seconds... 
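The --json /dev/fd/63 argument in the bdevperf command above is bash process substitution: gen_nvmf_target_json writes the attach-controller document (shown expanded by printf '%s\n' in the trace) into an anonymous pipe that bdevperf reads as its configuration, so no config file touches disk. The invocation shape, with the workload flags spelled out:

    # -q 64: queue depth; -o 65536: 64 KiB I/Os; -w verify: write/read-back-verify
    # workload; -t 10: run for ten seconds
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10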
00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:34.442 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.715 22:39:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.715 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:34.715 [2024-12-07 22:39:49.400974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.715 [2024-12-07 22:39:49.401038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.715 [2024-12-07 22:39:49.401066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.715 [2024-12-07 22:39:49.401079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.715 [2024-12-07 22:39:49.401091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.715 [2024-12-07 22:39:49.401101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.715 [2024-12-07 22:39:49.401113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.715 [2024-12-07 22:39:49.401123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.715 [2024-12-07 22:39:49.401136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.715 [2024-12-07 22:39:49.401146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.715 
[2024-12-07 22:39:49.401158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.715 [2024-12-07 22:39:49.401167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.715 [2024-12-07 22:39:49.401179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.715 [2024-12-07 22:39:49.401188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.715 [2024-12-07 22:39:49.401200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.715 [2024-12-07 22:39:49.401210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.715 [2024-12-07 22:39:49.401222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.715 [2024-12-07 22:39:49.401231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 
22:39:49.401418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.716 [2024-12-07 22:39:49.401620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401650] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 22:39:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:34.716 [2024-12-07 22:39:49.401861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.401986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.401999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.402009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.402020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.402030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.402042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.402051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.402064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.402073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.402085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.402094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.402106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.402116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.402128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.402137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.402149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:34.716 [2024-12-07 22:39:49.402160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.402172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.716 [2024-12-07 22:39:49.402196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.716 [2024-12-07 22:39:49.402220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 [2024-12-07 22:39:49.402229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 [2024-12-07 22:39:49.402251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 [2024-12-07 22:39:49.402273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 [2024-12-07 22:39:49.402295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 [2024-12-07 22:39:49.402321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 [2024-12-07 22:39:49.402343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 [2024-12-07 22:39:49.402364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 [2024-12-07 22:39:49.402386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 
[2024-12-07 22:39:49.402408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 [2024-12-07 22:39:49.402428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 [2024-12-07 22:39:49.402450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 [2024-12-07 22:39:49.402471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 [2024-12-07 22:39:49.402492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 [2024-12-07 22:39:49.402514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 [2024-12-07 22:39:49.402537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 [2024-12-07 22:39:49.402559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 [2024-12-07 22:39:49.402595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:34.717 [2024-12-07 22:39:49.402616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c2370 is same with the state(6) to be set 00:08:34.717 [2024-12-07 22:39:49.402695] 
bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8c2370 was disconnected and freed. reset controller. 00:08:34.717 [2024-12-07 22:39:49.402828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:34.717 [2024-12-07 22:39:49.402846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:34.717 [2024-12-07 22:39:49.402902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:34.717 [2024-12-07 22:39:49.402942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:34.717 [2024-12-07 22:39:49.402963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:34.717 [2024-12-07 22:39:49.402973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6aa860 is same with the state(6) to be set 00:08:34.717 [2024-12-07 22:39:49.404123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:34.717 task offset: 81920 on job bdev=Nvme0n1 fails 00:08:34.717 00:08:34.717 Latency(us) 00:08:34.717 [2024-12-07T22:39:49.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.717 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:34.717 Job: Nvme0n1 ended in about 0.45 seconds with error 00:08:34.717 Verification LBA range: start 0x0 length 0x400 00:08:34.717 Nvme0n1 : 0.45 1418.33 88.65 141.83 0.00 39708.69 2532.07 39083.29 00:08:34.717 [2024-12-07T22:39:49.483Z] =================================================================================================================== 00:08:34.717 [2024-12-07T22:39:49.483Z] Total : 1418.33 88.65 141.83 0.00 39708.69 2532.07 39083.29 00:08:34.717 [2024-12-07 22:39:49.406472] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.717 [2024-12-07 22:39:49.406499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6aa860 (9): Bad file descriptor 00:08:34.717 [2024-12-07 22:39:49.410939] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
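The abort storm above is the test working as intended: host_management.sh polls bdev_get_iostat until Nvme0n1 has completed at least 100 reads (67, then 579 in the trace), then removes host0 from cnode0 and adds it back. The target drops the connection, every queued WRITE completes as ABORTED - SQ DELETION, and bdev_nvme resets the controller, which is the "Resetting controller successful" line. Reduced to bare rpc.py calls, with paths and NQNs as traced and the retry/trap logic omitted:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # wait until bdevperf has done enough I/O for the abort to be meaningful
    while (( $($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                | jq -r '.bdevs[0].num_read_ops') < 100 )); do
        sleep 0.25
    done
    # revoke and restore the host: in-flight I/O aborts, controller resets
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0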
00:08:35.655 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 74366 00:08:35.655 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (74366) - No such process 00:08:35.655 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:35.655 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:35.655 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:35.655 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:35.655 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:35.655 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:35.655 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:35.655 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:35.655 { 00:08:35.655 "params": { 00:08:35.655 "name": "Nvme$subsystem", 00:08:35.655 "trtype": "$TEST_TRANSPORT", 00:08:35.655 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.655 "adrfam": "ipv4", 00:08:35.655 "trsvcid": "$NVMF_PORT", 00:08:35.655 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.655 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.655 "hdgst": ${hdgst:-false}, 00:08:35.655 "ddgst": ${ddgst:-false} 00:08:35.655 }, 00:08:35.655 "method": "bdev_nvme_attach_controller" 00:08:35.655 } 00:08:35.655 EOF 00:08:35.655 )") 00:08:35.655 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:35.655 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:35.915 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:35.915 22:39:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:35.915 "params": { 00:08:35.915 "name": "Nvme0", 00:08:35.915 "trtype": "tcp", 00:08:35.915 "traddr": "10.0.0.3", 00:08:35.915 "adrfam": "ipv4", 00:08:35.915 "trsvcid": "4420", 00:08:35.915 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:35.915 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:35.915 "hdgst": false, 00:08:35.915 "ddgst": false 00:08:35.915 }, 00:08:35.915 "method": "bdev_nvme_attach_controller" 00:08:35.915 }' 00:08:35.915 [2024-12-07 22:39:50.467025] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:08:35.915 [2024-12-07 22:39:50.467170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74406 ] 00:08:35.915 [2024-12-07 22:39:50.611390] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.915 [2024-12-07 22:39:50.649636] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.174 [2024-12-07 22:39:50.689197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.174 Running I/O for 1 seconds... 00:08:37.113 1536.00 IOPS, 96.00 MiB/s 00:08:37.113 Latency(us) 00:08:37.113 [2024-12-07T22:39:51.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.113 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:37.113 Verification LBA range: start 0x0 length 0x400 00:08:37.113 Nvme0n1 : 1.02 1566.36 97.90 0.00 0.00 40099.06 3604.48 35508.60 00:08:37.113 [2024-12-07T22:39:51.879Z] =================================================================================================================== 00:08:37.113 [2024-12-07T22:39:51.879Z] Total : 1566.36 97.90 0.00 0.00 40099.06 3604.48 35508.60 00:08:37.373 22:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:37.373 22:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:37.373 22:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:37.373 22:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:37.373 22:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:37.373 22:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:37.373 22:39:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:37.373 rmmod nvme_tcp 00:08:37.373 rmmod nvme_fabrics 00:08:37.373 rmmod nvme_keyring 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 74314 ']' 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 74314 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 74314 ']' 00:08:37.373 22:39:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 74314 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74314 00:08:37.373 killing process with pid 74314 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74314' 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 74314 00:08:37.373 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 74314 00:08:37.632 [2024-12-07 22:39:52.258578] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:37.633 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:37.892 22:39:52 
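nvmftestfini above unloads nvme-tcp/nvme-fabrics/nvme-keyring, kills the target (pid 74314), flushes SPDK rules from iptables (iptr pipes iptables-save through grep -v SPDK_NVMF into iptables-restore), and then unwinds the veth topology; the link deletions continue into the next block. The teardown order, condensed; the final namespace removal is what _remove_spdk_ns is assumed to do:

    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster   # detach every leg from the bridge first
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumed body of _remove_spdk_ns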
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:37.892 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:37.892 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:37.892 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:37.892 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:37.892 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.892 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.892 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.892 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:37.892 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:37.892 00:08:37.892 real 0m5.250s 00:08:37.892 user 0m18.086s 00:08:37.892 sys 0m1.450s 00:08:37.892 ************************************ 00:08:37.892 END TEST nvmf_host_management 00:08:37.892 ************************************ 00:08:37.892 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.892 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:37.892 22:39:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:37.892 22:39:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:37.892 22:39:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.892 22:39:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:37.892 ************************************ 00:08:37.892 START TEST nvmf_lvol 00:08:37.892 ************************************ 00:08:37.892 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:38.152 * Looking for test storage... 
00:08:38.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:38.152 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:38.152 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:08:38.152 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:38.152 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:38.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.153 --rc genhtml_branch_coverage=1 00:08:38.153 --rc genhtml_function_coverage=1 00:08:38.153 --rc genhtml_legend=1 00:08:38.153 --rc geninfo_all_blocks=1 00:08:38.153 --rc geninfo_unexecuted_blocks=1 00:08:38.153 00:08:38.153 ' 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:38.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.153 --rc genhtml_branch_coverage=1 00:08:38.153 --rc genhtml_function_coverage=1 00:08:38.153 --rc genhtml_legend=1 00:08:38.153 --rc geninfo_all_blocks=1 00:08:38.153 --rc geninfo_unexecuted_blocks=1 00:08:38.153 00:08:38.153 ' 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:38.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.153 --rc genhtml_branch_coverage=1 00:08:38.153 --rc genhtml_function_coverage=1 00:08:38.153 --rc genhtml_legend=1 00:08:38.153 --rc geninfo_all_blocks=1 00:08:38.153 --rc geninfo_unexecuted_blocks=1 00:08:38.153 00:08:38.153 ' 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:38.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.153 --rc genhtml_branch_coverage=1 00:08:38.153 --rc genhtml_function_coverage=1 00:08:38.153 --rc genhtml_legend=1 00:08:38.153 --rc geninfo_all_blocks=1 00:08:38.153 --rc geninfo_unexecuted_blocks=1 00:08:38.153 00:08:38.153 ' 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.153 22:39:52 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.153 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:38.153 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:38.154 
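The "line 33: [: : integer expression expected" above is a bash artifact rather than a test failure: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' because the gate variable it tests is unset, and test(1) requires integers on both sides of -eq. A defensive form would default the empty value before comparing; the variable name here is hypothetical:

    # instead of: [ "$SPDK_TEST_SOMETHING" -eq 1 ]
    if [ "${SPDK_TEST_SOMETHING:-0}" -eq 1 ]; then
        echo "feature enabled"
    fi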
22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
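nvmftestinit has taken the virtual-ethernet path (NET_TYPE=virt), so nvmf_veth_init first probes for leftover interfaces; the "Cannot find device" lines that follow are the expected answer on a clean host, each paired with "# true" so set -e does not trip. The topology it then builds, condensed to one initiator/target pair (the trace below creates two of each, at 10.0.0.1-10.0.0.4):

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: one end stays in the root namespace as a bridge leg
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # bridge the root-namespace legs together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br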
00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:38.154 Cannot find device "nvmf_init_br" 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:38.154 Cannot find device "nvmf_init_br2" 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:38.154 Cannot find device "nvmf_tgt_br" 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:38.154 Cannot find device "nvmf_tgt_br2" 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:38.154 Cannot find device "nvmf_init_br" 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:38.154 Cannot find device "nvmf_init_br2" 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:38.154 Cannot find device "nvmf_tgt_br" 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:38.154 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:38.413 Cannot find device "nvmf_tgt_br2" 00:08:38.413 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:38.413 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:38.413 Cannot find device "nvmf_br" 00:08:38.413 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:38.413 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:38.413 Cannot find device "nvmf_init_if" 00:08:38.413 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:38.413 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:38.413 Cannot find device "nvmf_init_if2" 00:08:38.413 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:38.414 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:38.414 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.414 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:38.414 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:38.414 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:38.414 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:38.414 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:38.414 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:38.414 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:38.414 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:38.414 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:38.414 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:38.414 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:38.414 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:08:38.414 00:08:38.414 --- 10.0.0.3 ping statistics --- 00:08:38.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.414 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:38.414 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:38.414 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:08:38.414 00:08:38.414 --- 10.0.0.4 ping statistics --- 00:08:38.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.414 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:38.414 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:38.414 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:08:38.414 00:08:38.414 --- 10.0.0.1 ping statistics --- 00:08:38.414 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.414 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:08:38.414 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:38.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:38.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:08:38.677 00:08:38.677 --- 10.0.0.2 ping statistics --- 00:08:38.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.677 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # return 0 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=74670 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 74670 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:38.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 74670 ']' 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:38.677 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:38.677 [2024-12-07 22:39:53.263273] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
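
NOTE: with connectivity verified in all four directions, nvmfappstart launches nvmf_tgt inside the namespace and blocks until its RPC socket answers. Conceptually (a sketch, not the literal waitforlisten implementation):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app responds
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

Once the target is up, the test assembles the volume stack it will exercise (the rpc.py calls traced below): two 64 MiB malloc bdevs striped into a raid0, an lvstore on the raid, a 20 MiB lvol, and an NVMe-oF subsystem exporting the lvol over TCP on 10.0.0.3:4420. While spdk_nvme_perf drives random writes against it, the script snapshots the lvol, resizes it to 30 MiB, clones the snapshot and inflates the clone, so the lvol metadata operations run under live I/O.
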
00:08:38.677 [2024-12-07 22:39:53.263615] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.677 [2024-12-07 22:39:53.402469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:38.937 [2024-12-07 22:39:53.445904] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.937 [2024-12-07 22:39:53.446294] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.937 [2024-12-07 22:39:53.446487] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.937 [2024-12-07 22:39:53.446826] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.937 [2024-12-07 22:39:53.446848] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.937 [2024-12-07 22:39:53.447021] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.937 [2024-12-07 22:39:53.447447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:38.937 [2024-12-07 22:39:53.447463] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.937 [2024-12-07 22:39:53.482170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.504 22:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:39.504 22:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:39.504 22:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:39.504 22:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:39.504 22:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:39.504 22:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.504 22:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:39.762 [2024-12-07 22:39:54.470457] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.763 22:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:40.329 22:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:40.329 22:39:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:40.329 22:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:40.329 22:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:40.586 22:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:40.844 22:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=00d0b638-b8f1-43da-af5a-751798e46514 00:08:40.844 22:39:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 00d0b638-b8f1-43da-af5a-751798e46514 lvol 20 00:08:41.410 22:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d9cb6adf-314b-4512-8c1f-2a6eb0512a1d 00:08:41.410 22:39:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:41.410 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d9cb6adf-314b-4512-8c1f-2a6eb0512a1d 00:08:41.670 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:41.929 [2024-12-07 22:39:56.651461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:41.929 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:42.190 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=74751 00:08:42.190 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:42.190 22:39:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:43.571 22:39:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot d9cb6adf-314b-4512-8c1f-2a6eb0512a1d MY_SNAPSHOT 00:08:43.571 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=901defb3-1d9b-4e89-9b84-1aa4b6b8e3d9 00:08:43.571 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize d9cb6adf-314b-4512-8c1f-2a6eb0512a1d 30 00:08:43.831 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 901defb3-1d9b-4e89-9b84-1aa4b6b8e3d9 MY_CLONE 00:08:44.090 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0b7217fe-b287-4e02-ad4a-54f541b8d4a7 00:08:44.090 22:39:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 0b7217fe-b287-4e02-ad4a-54f541b8d4a7 00:08:44.659 22:39:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 74751 00:08:52.777 Initializing NVMe Controllers 00:08:52.777 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:52.777 Controller IO queue size 128, less than required. 00:08:52.777 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:52.777 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:52.777 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:52.777 Initialization complete. Launching workers. 
00:08:52.777 ========================================================
00:08:52.777                                                                    Latency(us)
00:08:52.777 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:08:52.777 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3:   10501.84      41.02   12190.04     496.01   71431.63
00:08:52.777 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4:   10457.34      40.85   12249.18    2204.81   57683.67
00:08:52.777 ========================================================
00:08:52.777 Total                                                                  :   20959.19      81.87   12219.55     496.01   71431.63
00:08:52.777
00:08:52.777 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:08:52.777 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d9cb6adf-314b-4512-8c1f-2a6eb0512a1d
00:08:53.036 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 00d0b638-b8f1-43da-af5a-751798e46514
00:08:53.337 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:08:53.337 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:08:53.337 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:08:53.337 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup
00:08:53.337 22:40:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:08:53.337 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:53.337 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:08:53.337 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:53.337 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:53.337 rmmod nvme_tcp
00:08:53.337 rmmod nvme_fabrics
00:08:53.337 rmmod nvme_keyring
00:08:53.337 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:53.337 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:08:53.337 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:08:53.337 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 74670 ']'
00:08:53.337 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 74670
00:08:53.337 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 74670 ']'
00:08:53.337 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 74670
00:08:53.337 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname
00:08:53.337 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:53.337 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74670
00:08:53.596 killing process with pid 74670
00:08:53.596 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:08:53.596 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:08:53.596 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol --
common/autotest_common.sh@968 -- # echo 'killing process with pid 74670' 00:08:53.597 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 74670 00:08:53.597 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 74670 00:08:53.597 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:53.597 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:53.597 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:53.597 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:53.597 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:08:53.597 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:53.597 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:08:53.597 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:53.597 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:53.597 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:53.597 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:53.597 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:53.597 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:53.597 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:53.597 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:53.856 00:08:53.856 real 0m15.925s 00:08:53.856 user 1m5.486s 00:08:53.856 sys 0m4.173s 00:08:53.856 ************************************ 00:08:53.856 END TEST nvmf_lvol 00:08:53.856 
************************************ 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.856 ************************************ 00:08:53.856 START TEST nvmf_lvs_grow 00:08:53.856 ************************************ 00:08:53.856 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:54.115 * Looking for test storage... 00:08:54.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:54.115 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:54.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.116 --rc genhtml_branch_coverage=1 00:08:54.116 --rc genhtml_function_coverage=1 00:08:54.116 --rc genhtml_legend=1 00:08:54.116 --rc geninfo_all_blocks=1 00:08:54.116 --rc geninfo_unexecuted_blocks=1 00:08:54.116 00:08:54.116 ' 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:54.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.116 --rc genhtml_branch_coverage=1 00:08:54.116 --rc genhtml_function_coverage=1 00:08:54.116 --rc genhtml_legend=1 00:08:54.116 --rc geninfo_all_blocks=1 00:08:54.116 --rc geninfo_unexecuted_blocks=1 00:08:54.116 00:08:54.116 ' 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:54.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.116 --rc genhtml_branch_coverage=1 00:08:54.116 --rc genhtml_function_coverage=1 00:08:54.116 --rc genhtml_legend=1 00:08:54.116 --rc geninfo_all_blocks=1 00:08:54.116 --rc geninfo_unexecuted_blocks=1 00:08:54.116 00:08:54.116 ' 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:54.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:54.116 --rc genhtml_branch_coverage=1 00:08:54.116 --rc genhtml_function_coverage=1 00:08:54.116 --rc genhtml_legend=1 00:08:54.116 --rc geninfo_all_blocks=1 00:08:54.116 --rc geninfo_unexecuted_blocks=1 00:08:54.116 00:08:54.116 ' 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:54.116 22:40:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:54.116 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
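
NOTE: unlike the lvol test, nvmf_lvs_grow drives two SPDK processes: the nvmf_tgt target on the default RPC socket and a bdevperf initiator on /var/tmp/bdevperf.sock. The same rpc.py script talks to either one; the -s flag selects the socket. Roughly (the bdev_get_bdevs call is illustrative, not a command traced in this log):

    ./scripts/rpc.py bdev_lvol_get_lvstores                      # default /var/tmp/spdk.sock (target)
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs    # same script, bdevperf process
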
00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:54.116 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
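
NOTE: nvmftestinit now runs a second time. run_test wraps every test in an exit trap, so each test tears its environment down completely and the next one rebuilds the namespace, veth pairs and bridge from nothing; the "Cannot find device" errors repeat below for the same best-effort-cleanup reason as before. The trap each test installs (visible verbatim in the trace) is:

    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT

nvmftestfini in turn kills the target, strips the harness's iptables rules, and deletes the interfaces and the namespace, which is exactly the sequence that closed out nvmf_lvol above.
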
00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:54.117 Cannot find device "nvmf_init_br" 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:54.117 Cannot find device "nvmf_init_br2" 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:54.117 Cannot find device "nvmf_tgt_br" 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:54.117 Cannot find device "nvmf_tgt_br2" 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:54.117 Cannot find device "nvmf_init_br" 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:54.117 Cannot find device "nvmf_init_br2" 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:54.117 Cannot find device "nvmf_tgt_br" 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:54.117 Cannot find device "nvmf_tgt_br2" 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:54.117 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:54.376 Cannot find device "nvmf_br" 00:08:54.376 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:54.376 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:54.376 Cannot find device "nvmf_init_if" 00:08:54.376 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:54.376 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:54.376 Cannot find device "nvmf_init_if2" 00:08:54.376 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:54.376 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:54.376 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:54.376 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:54.376 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:54.376 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:54.376 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:54.376 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:54.376 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:54.376 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:54.376 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:54.376 22:40:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:54.376 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:54.635 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:54.635 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
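
NOTE: the ipts wrapper that runs next tags every rule it inserts with an iptables comment beginning SPDK_NVMF:, embedding the original rule text. That makes teardown safe on shared machines: instead of tracking rule positions, the harness filters its own rules out of a full dump, as the iptr helper did during the previous nvmftestfini. Both halves of the pattern, as traced in this log:

    # setup: open NVMe/TCP port 4420 on the initiator interface, tagged for later removal
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # teardown: restore everything except the tagged rules
    iptables-save | grep -v SPDK_NVMF | iptables-restore
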
00:08:54.635 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:54.635 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:54.635 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:54.635 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:54.635 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:54.635 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:54.635 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:54.635 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:54.635 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:08:54.635 00:08:54.635 --- 10.0.0.3 ping statistics --- 00:08:54.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.635 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:08:54.635 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:54.635 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:54.635 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:08:54.635 00:08:54.635 --- 10.0.0.4 ping statistics --- 00:08:54.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.635 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:08:54.635 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:54.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:08:54.635 00:08:54.635 --- 10.0.0.1 ping statistics --- 00:08:54.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.635 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:08:54.635 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:54.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:54.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:08:54.636 00:08:54.636 --- 10.0.0.2 ping statistics --- 00:08:54.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.636 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # return 0 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=75122 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 75122 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 75122 ']' 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.636 22:40:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:54.636 [2024-12-07 22:40:09.283836] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
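
NOTE: the core mask differs between the two tests. nvmf_lvol started its target with -m 0x7 and got three reactors (cores 0-2), while this test uses -m 0x1 and runs a single reactor on core 0, matching the "Total cores available: 1" EAL notice below. The mask is plain hex over logical cores:

    # -m 0x7 = 0b111 -> cores 0,1,2 ; -m 0x1 = 0b001 -> core 0 only
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
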
00:08:54.636 [2024-12-07 22:40:09.283965] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.895 [2024-12-07 22:40:09.421936] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.895 [2024-12-07 22:40:09.463178] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.895 [2024-12-07 22:40:09.463229] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.895 [2024-12-07 22:40:09.463242] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.895 [2024-12-07 22:40:09.463252] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.895 [2024-12-07 22:40:09.463261] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:54.895 [2024-12-07 22:40:09.463292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.895 [2024-12-07 22:40:09.496561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:55.832 [2024-12-07 22:40:10.539030] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:55.832 ************************************ 00:08:55.832 START TEST lvs_grow_clean 00:08:55.832 ************************************ 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:55.832 22:40:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:55.832 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:56.400 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:56.400 22:40:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:56.660 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=42158006-b368-4b2a-8656-bf2f01305882 00:08:56.660 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42158006-b368-4b2a-8656-bf2f01305882 00:08:56.660 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:56.920 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:56.920 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:56.920 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 42158006-b368-4b2a-8656-bf2f01305882 lvol 150 00:08:57.179 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=71f160dc-1b11-4917-a22f-a8b86f24f00b 00:08:57.180 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:57.180 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:57.180 [2024-12-07 22:40:11.941735] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:57.180 [2024-12-07 22:40:11.941836] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:57.440 true 00:08:57.440 22:40:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42158006-b368-4b2a-8656-bf2f01305882 00:08:57.440 22:40:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:57.699 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:57.699 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:57.958 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 71f160dc-1b11-4917-a22f-a8b86f24f00b 00:08:58.218 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:58.218 [2024-12-07 22:40:12.934326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:58.218 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:58.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:58.792 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=75210 00:08:58.792 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:58.792 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:58.792 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 75210 /var/tmp/bdevperf.sock 00:08:58.792 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 75210 ']' 00:08:58.792 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:58.792 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.792 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:58.792 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.792 22:40:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:58.792 [2024-12-07 22:40:13.290322] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
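Every cluster count the assertions in this test check falls out of simple arithmetic on the 4 MiB cluster size. A condensed sketch of the setup steps the trace performs (paths shortened; $lvs stands for the lvstore UUID reported by bdev_lvol_create_lvstore):

    truncate -s 200M aio_file
    rpc.py bdev_aio_create aio_file aio_bdev 4096        # 200 MiB / 4 KiB = 51200 blocks
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs    # 50 clusters, 49 usable after metadata
    rpc.py bdev_lvol_create -u "$lvs" lvol 150           # ceil(150 MiB / 4 MiB) = 38 clusters
    truncate -s 400M aio_file
    rpc.py bdev_aio_rescan aio_bdev                      # 51200 -> 102400 blocks

Until bdev_lvol_grow_lvstore runs, the lvstore still reports total_data_clusters 49; after the grow it reports 99, and free_clusters settles at 99 - 38 = 61, which is exactly what the jq checks compare against.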
00:08:58.792 [2024-12-07 22:40:13.290407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75210 ] 00:08:58.792 [2024-12-07 22:40:13.425423] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.792 [2024-12-07 22:40:13.466161] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.792 [2024-12-07 22:40:13.498579] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:59.728 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:59.728 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:59.728 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:59.987 Nvme0n1 00:08:59.987 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:00.246 [ 00:09:00.246 { 00:09:00.246 "name": "Nvme0n1", 00:09:00.246 "aliases": [ 00:09:00.246 "71f160dc-1b11-4917-a22f-a8b86f24f00b" 00:09:00.246 ], 00:09:00.246 "product_name": "NVMe disk", 00:09:00.246 "block_size": 4096, 00:09:00.246 "num_blocks": 38912, 00:09:00.246 "uuid": "71f160dc-1b11-4917-a22f-a8b86f24f00b", 00:09:00.246 "numa_id": -1, 00:09:00.246 "assigned_rate_limits": { 00:09:00.246 "rw_ios_per_sec": 0, 00:09:00.246 "rw_mbytes_per_sec": 0, 00:09:00.246 "r_mbytes_per_sec": 0, 00:09:00.246 "w_mbytes_per_sec": 0 00:09:00.246 }, 00:09:00.246 "claimed": false, 00:09:00.246 "zoned": false, 00:09:00.247 "supported_io_types": { 00:09:00.247 "read": true, 00:09:00.247 "write": true, 00:09:00.247 "unmap": true, 00:09:00.247 "flush": true, 00:09:00.247 "reset": true, 00:09:00.247 "nvme_admin": true, 00:09:00.247 "nvme_io": true, 00:09:00.247 "nvme_io_md": false, 00:09:00.247 "write_zeroes": true, 00:09:00.247 "zcopy": false, 00:09:00.247 "get_zone_info": false, 00:09:00.247 "zone_management": false, 00:09:00.247 "zone_append": false, 00:09:00.247 "compare": true, 00:09:00.247 "compare_and_write": true, 00:09:00.247 "abort": true, 00:09:00.247 "seek_hole": false, 00:09:00.247 "seek_data": false, 00:09:00.247 "copy": true, 00:09:00.247 "nvme_iov_md": false 00:09:00.247 }, 00:09:00.247 "memory_domains": [ 00:09:00.247 { 00:09:00.247 "dma_device_id": "system", 00:09:00.247 "dma_device_type": 1 00:09:00.247 } 00:09:00.247 ], 00:09:00.247 "driver_specific": { 00:09:00.247 "nvme": [ 00:09:00.247 { 00:09:00.247 "trid": { 00:09:00.247 "trtype": "TCP", 00:09:00.247 "adrfam": "IPv4", 00:09:00.247 "traddr": "10.0.0.3", 00:09:00.247 "trsvcid": "4420", 00:09:00.247 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:00.247 }, 00:09:00.247 "ctrlr_data": { 00:09:00.247 "cntlid": 1, 00:09:00.247 "vendor_id": "0x8086", 00:09:00.247 "model_number": "SPDK bdev Controller", 00:09:00.247 "serial_number": "SPDK0", 00:09:00.247 "firmware_revision": "24.09.1", 00:09:00.247 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:00.247 "oacs": { 00:09:00.247 "security": 0, 00:09:00.247 "format": 0, 00:09:00.247 "firmware": 0, 
00:09:00.247 "ns_manage": 0 00:09:00.247 }, 00:09:00.247 "multi_ctrlr": true, 00:09:00.247 "ana_reporting": false 00:09:00.247 }, 00:09:00.247 "vs": { 00:09:00.247 "nvme_version": "1.3" 00:09:00.247 }, 00:09:00.247 "ns_data": { 00:09:00.247 "id": 1, 00:09:00.247 "can_share": true 00:09:00.247 } 00:09:00.247 } 00:09:00.247 ], 00:09:00.247 "mp_policy": "active_passive" 00:09:00.247 } 00:09:00.247 } 00:09:00.247 ] 00:09:00.247 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=75233 00:09:00.247 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:00.247 22:40:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:00.247 Running I/O for 10 seconds... 00:09:01.184 Latency(us) 00:09:01.184 [2024-12-07T22:40:15.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.184 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.184 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:01.184 [2024-12-07T22:40:15.950Z] =================================================================================================================== 00:09:01.184 [2024-12-07T22:40:15.950Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:01.184 00:09:02.122 22:40:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 42158006-b368-4b2a-8656-bf2f01305882 00:09:02.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.381 Nvme0n1 : 2.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:02.381 [2024-12-07T22:40:17.147Z] =================================================================================================================== 00:09:02.381 [2024-12-07T22:40:17.147Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:02.381 00:09:02.381 true 00:09:02.641 22:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42158006-b368-4b2a-8656-bf2f01305882 00:09:02.641 22:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:02.901 22:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:02.901 22:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:02.901 22:40:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 75233 00:09:03.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.469 Nvme0n1 : 3.00 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:09:03.469 [2024-12-07T22:40:18.235Z] =================================================================================================================== 00:09:03.469 [2024-12-07T22:40:18.235Z] Total : 6561.67 25.63 0.00 0.00 0.00 0.00 0.00 00:09:03.469 00:09:04.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.406 Nvme0n1 : 4.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:04.406 [2024-12-07T22:40:19.172Z] 
=================================================================================================================== 00:09:04.406 [2024-12-07T22:40:19.172Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:09:04.406 00:09:05.344 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.344 Nvme0n1 : 5.00 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:09:05.344 [2024-12-07T22:40:20.110Z] =================================================================================================================== 00:09:05.344 [2024-12-07T22:40:20.110Z] Total : 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:09:05.344 00:09:06.281 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.281 Nvme0n1 : 6.00 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:09:06.281 [2024-12-07T22:40:21.047Z] =================================================================================================================== 00:09:06.281 [2024-12-07T22:40:21.047Z] Total : 6392.33 24.97 0.00 0.00 0.00 0.00 0.00 00:09:06.281 00:09:07.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.254 Nvme0n1 : 7.00 6386.29 24.95 0.00 0.00 0.00 0.00 0.00 00:09:07.254 [2024-12-07T22:40:22.020Z] =================================================================================================================== 00:09:07.254 [2024-12-07T22:40:22.020Z] Total : 6386.29 24.95 0.00 0.00 0.00 0.00 0.00 00:09:07.254 00:09:08.194 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.194 Nvme0n1 : 8.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:09:08.194 [2024-12-07T22:40:22.960Z] =================================================================================================================== 00:09:08.194 [2024-12-07T22:40:22.960Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:09:08.194 00:09:09.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.567 Nvme0n1 : 9.00 6364.11 24.86 0.00 0.00 0.00 0.00 0.00 00:09:09.567 [2024-12-07T22:40:24.333Z] =================================================================================================================== 00:09:09.567 [2024-12-07T22:40:24.333Z] Total : 6364.11 24.86 0.00 0.00 0.00 0.00 0.00 00:09:09.567 00:09:10.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.501 Nvme0n1 : 10.00 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:09:10.501 [2024-12-07T22:40:25.267Z] =================================================================================================================== 00:09:10.501 [2024-12-07T22:40:25.267Z] Total : 6350.00 24.80 0.00 0.00 0.00 0.00 0.00 00:09:10.501 00:09:10.501 00:09:10.501 Latency(us) 00:09:10.501 [2024-12-07T22:40:25.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.501 Nvme0n1 : 10.01 6357.15 24.83 0.00 0.00 20129.66 16920.20 42896.29 00:09:10.501 [2024-12-07T22:40:25.267Z] =================================================================================================================== 00:09:10.501 [2024-12-07T22:40:25.267Z] Total : 6357.15 24.83 0.00 0.00 20129.66 16920.20 42896.29 00:09:10.501 { 00:09:10.501 "results": [ 00:09:10.501 { 00:09:10.501 "job": "Nvme0n1", 00:09:10.501 "core_mask": "0x2", 00:09:10.501 "workload": "randwrite", 00:09:10.501 "status": "finished", 00:09:10.501 "queue_depth": 128, 00:09:10.501 "io_size": 4096, 00:09:10.501 "runtime": 
10.008887, 00:09:10.501 "iops": 6357.15040043913, 00:09:10.501 "mibps": 24.83261875171535, 00:09:10.501 "io_failed": 0, 00:09:10.501 "io_timeout": 0, 00:09:10.501 "avg_latency_us": 20129.659354086536, 00:09:10.501 "min_latency_us": 16920.203636363636, 00:09:10.501 "max_latency_us": 42896.29090909091 00:09:10.501 } 00:09:10.501 ], 00:09:10.501 "core_count": 1 00:09:10.501 } 00:09:10.501 22:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 75210 00:09:10.501 22:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 75210 ']' 00:09:10.501 22:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 75210 00:09:10.501 22:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:10.501 22:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:10.501 22:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75210 00:09:10.501 killing process with pid 75210 00:09:10.501 Received shutdown signal, test time was about 10.000000 seconds 00:09:10.501 00:09:10.501 Latency(us) 00:09:10.501 [2024-12-07T22:40:25.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.501 [2024-12-07T22:40:25.267Z] =================================================================================================================== 00:09:10.501 [2024-12-07T22:40:25.267Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:10.501 22:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:10.501 22:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:10.501 22:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75210' 00:09:10.501 22:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 75210 00:09:10.501 22:40:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 75210 00:09:10.501 22:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:10.759 22:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:11.017 22:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42158006-b368-4b2a-8656-bf2f01305882 00:09:11.017 22:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:11.275 22:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:11.275 22:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:11.275 22:40:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:11.534 [2024-12-07 22:40:26.105349] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:11.534 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42158006-b368-4b2a-8656-bf2f01305882 00:09:11.534 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:11.534 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42158006-b368-4b2a-8656-bf2f01305882 00:09:11.534 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.534 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.534 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.534 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.534 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.534 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.534 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.534 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:11.534 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42158006-b368-4b2a-8656-bf2f01305882 00:09:11.792 request: 00:09:11.792 { 00:09:11.792 "uuid": "42158006-b368-4b2a-8656-bf2f01305882", 00:09:11.792 "method": "bdev_lvol_get_lvstores", 00:09:11.792 "req_id": 1 00:09:11.792 } 00:09:11.792 Got JSON-RPC error response 00:09:11.792 response: 00:09:11.792 { 00:09:11.792 "code": -19, 00:09:11.792 "message": "No such device" 00:09:11.792 } 00:09:11.792 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:11.792 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:11.792 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:11.792 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:11.792 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:12.051 aio_bdev 00:09:12.051 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
71f160dc-1b11-4917-a22f-a8b86f24f00b 00:09:12.051 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=71f160dc-1b11-4917-a22f-a8b86f24f00b 00:09:12.051 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:12.051 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:12.051 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:12.051 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:12.051 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:12.310 22:40:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 71f160dc-1b11-4917-a22f-a8b86f24f00b -t 2000 00:09:12.569 [ 00:09:12.569 { 00:09:12.569 "name": "71f160dc-1b11-4917-a22f-a8b86f24f00b", 00:09:12.569 "aliases": [ 00:09:12.569 "lvs/lvol" 00:09:12.569 ], 00:09:12.569 "product_name": "Logical Volume", 00:09:12.569 "block_size": 4096, 00:09:12.569 "num_blocks": 38912, 00:09:12.569 "uuid": "71f160dc-1b11-4917-a22f-a8b86f24f00b", 00:09:12.569 "assigned_rate_limits": { 00:09:12.569 "rw_ios_per_sec": 0, 00:09:12.569 "rw_mbytes_per_sec": 0, 00:09:12.569 "r_mbytes_per_sec": 0, 00:09:12.569 "w_mbytes_per_sec": 0 00:09:12.569 }, 00:09:12.569 "claimed": false, 00:09:12.569 "zoned": false, 00:09:12.569 "supported_io_types": { 00:09:12.569 "read": true, 00:09:12.569 "write": true, 00:09:12.569 "unmap": true, 00:09:12.569 "flush": false, 00:09:12.569 "reset": true, 00:09:12.569 "nvme_admin": false, 00:09:12.569 "nvme_io": false, 00:09:12.569 "nvme_io_md": false, 00:09:12.569 "write_zeroes": true, 00:09:12.569 "zcopy": false, 00:09:12.569 "get_zone_info": false, 00:09:12.569 "zone_management": false, 00:09:12.569 "zone_append": false, 00:09:12.569 "compare": false, 00:09:12.569 "compare_and_write": false, 00:09:12.569 "abort": false, 00:09:12.569 "seek_hole": true, 00:09:12.569 "seek_data": true, 00:09:12.569 "copy": false, 00:09:12.569 "nvme_iov_md": false 00:09:12.569 }, 00:09:12.569 "driver_specific": { 00:09:12.569 "lvol": { 00:09:12.569 "lvol_store_uuid": "42158006-b368-4b2a-8656-bf2f01305882", 00:09:12.569 "base_bdev": "aio_bdev", 00:09:12.569 "thin_provision": false, 00:09:12.569 "num_allocated_clusters": 38, 00:09:12.569 "snapshot": false, 00:09:12.569 "clone": false, 00:09:12.569 "esnap_clone": false 00:09:12.569 } 00:09:12.569 } 00:09:12.569 } 00:09:12.569 ] 00:09:12.569 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:12.569 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42158006-b368-4b2a-8656-bf2f01305882 00:09:12.569 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:12.827 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:12.827 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:09:12.827 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42158006-b368-4b2a-8656-bf2f01305882 00:09:13.086 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:13.086 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 71f160dc-1b11-4917-a22f-a8b86f24f00b 00:09:13.344 22:40:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42158006-b368-4b2a-8656-bf2f01305882 00:09:13.602 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:13.861 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:14.120 ************************************ 00:09:14.120 END TEST lvs_grow_clean 00:09:14.120 ************************************ 00:09:14.120 00:09:14.120 real 0m18.215s 00:09:14.120 user 0m17.388s 00:09:14.120 sys 0m2.440s 00:09:14.120 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.120 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:14.120 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:14.120 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:14.120 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.120 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:14.120 ************************************ 00:09:14.120 START TEST lvs_grow_dirty 00:09:14.120 ************************************ 00:09:14.120 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:14.120 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:14.120 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:14.120 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:14.120 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:14.120 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:14.120 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:14.120 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:14.120 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:14.120 22:40:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:14.688 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:14.688 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:14.688 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6dea4f6e-3ed6-469e-bacc-510560aff2cf 00:09:14.688 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea4f6e-3ed6-469e-bacc-510560aff2cf 00:09:14.688 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:14.947 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:14.947 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:14.947 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6dea4f6e-3ed6-469e-bacc-510560aff2cf lvol 150 00:09:15.206 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=5b43960d-aabc-4068-b1d6-40cc28f32ced 00:09:15.207 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:15.207 22:40:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:15.465 [2024-12-07 22:40:30.128541] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:15.466 [2024-12-07 22:40:30.128629] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:15.466 true 00:09:15.466 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea4f6e-3ed6-469e-bacc-510560aff2cf 00:09:15.466 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:15.724 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:15.724 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:15.983 22:40:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5b43960d-aabc-4068-b1d6-40cc28f32ced 00:09:16.242 22:40:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:16.502 [2024-12-07 22:40:31.201112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:16.502 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:16.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:16.762 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=75483 00:09:16.762 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:16.762 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:16.762 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 75483 /var/tmp/bdevperf.sock 00:09:16.762 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 75483 ']' 00:09:16.762 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:16.762 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.762 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:16.762 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.762 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:16.762 [2024-12-07 22:40:31.514852] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
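bdevperf is started with -z, so it idles until driven over its own RPC socket; the controller attach and the workload are two separate RPC interactions, and the lvstore grow is issued while the 10-second randwrite run is in flight, which is the point of the test. Schematically, using the same commands as the trace (paths shortened, $lvs hypothetical shorthand for the lvstore UUID):

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"    # issued mid-run, around the 2 s mark

The per-second IOPS table that follows should show I/O continuing uninterrupted across the grow.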
00:09:16.762 [2024-12-07 22:40:31.515229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75483 ] 00:09:17.021 [2024-12-07 22:40:31.653315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.021 [2024-12-07 22:40:31.695269] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.021 [2024-12-07 22:40:31.727577] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:17.021 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:17.021 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:17.021 22:40:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:17.589 Nvme0n1 00:09:17.589 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:17.849 [ 00:09:17.849 { 00:09:17.849 "name": "Nvme0n1", 00:09:17.849 "aliases": [ 00:09:17.849 "5b43960d-aabc-4068-b1d6-40cc28f32ced" 00:09:17.849 ], 00:09:17.849 "product_name": "NVMe disk", 00:09:17.849 "block_size": 4096, 00:09:17.849 "num_blocks": 38912, 00:09:17.849 "uuid": "5b43960d-aabc-4068-b1d6-40cc28f32ced", 00:09:17.849 "numa_id": -1, 00:09:17.849 "assigned_rate_limits": { 00:09:17.849 "rw_ios_per_sec": 0, 00:09:17.849 "rw_mbytes_per_sec": 0, 00:09:17.849 "r_mbytes_per_sec": 0, 00:09:17.849 "w_mbytes_per_sec": 0 00:09:17.849 }, 00:09:17.849 "claimed": false, 00:09:17.849 "zoned": false, 00:09:17.849 "supported_io_types": { 00:09:17.849 "read": true, 00:09:17.849 "write": true, 00:09:17.849 "unmap": true, 00:09:17.849 "flush": true, 00:09:17.849 "reset": true, 00:09:17.849 "nvme_admin": true, 00:09:17.849 "nvme_io": true, 00:09:17.849 "nvme_io_md": false, 00:09:17.849 "write_zeroes": true, 00:09:17.849 "zcopy": false, 00:09:17.849 "get_zone_info": false, 00:09:17.849 "zone_management": false, 00:09:17.849 "zone_append": false, 00:09:17.849 "compare": true, 00:09:17.849 "compare_and_write": true, 00:09:17.849 "abort": true, 00:09:17.849 "seek_hole": false, 00:09:17.849 "seek_data": false, 00:09:17.849 "copy": true, 00:09:17.849 "nvme_iov_md": false 00:09:17.849 }, 00:09:17.849 "memory_domains": [ 00:09:17.849 { 00:09:17.849 "dma_device_id": "system", 00:09:17.849 "dma_device_type": 1 00:09:17.849 } 00:09:17.849 ], 00:09:17.849 "driver_specific": { 00:09:17.849 "nvme": [ 00:09:17.849 { 00:09:17.849 "trid": { 00:09:17.849 "trtype": "TCP", 00:09:17.849 "adrfam": "IPv4", 00:09:17.849 "traddr": "10.0.0.3", 00:09:17.849 "trsvcid": "4420", 00:09:17.849 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:17.849 }, 00:09:17.849 "ctrlr_data": { 00:09:17.849 "cntlid": 1, 00:09:17.849 "vendor_id": "0x8086", 00:09:17.849 "model_number": "SPDK bdev Controller", 00:09:17.849 "serial_number": "SPDK0", 00:09:17.849 "firmware_revision": "24.09.1", 00:09:17.849 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:17.849 "oacs": { 00:09:17.849 "security": 0, 00:09:17.849 "format": 0, 00:09:17.849 "firmware": 0, 
00:09:17.849 "ns_manage": 0 00:09:17.849 }, 00:09:17.849 "multi_ctrlr": true, 00:09:17.849 "ana_reporting": false 00:09:17.849 }, 00:09:17.849 "vs": { 00:09:17.849 "nvme_version": "1.3" 00:09:17.849 }, 00:09:17.849 "ns_data": { 00:09:17.849 "id": 1, 00:09:17.849 "can_share": true 00:09:17.849 } 00:09:17.849 } 00:09:17.849 ], 00:09:17.849 "mp_policy": "active_passive" 00:09:17.849 } 00:09:17.849 } 00:09:17.849 ] 00:09:17.849 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=75499 00:09:17.849 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:17.849 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:17.849 Running I/O for 10 seconds... 00:09:18.787 Latency(us) 00:09:18.787 [2024-12-07T22:40:33.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.787 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:18.787 [2024-12-07T22:40:33.553Z] =================================================================================================================== 00:09:18.787 [2024-12-07T22:40:33.553Z] Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:18.787 00:09:19.724 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6dea4f6e-3ed6-469e-bacc-510560aff2cf 00:09:19.724 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.724 Nvme0n1 : 2.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:19.724 [2024-12-07T22:40:34.490Z] =================================================================================================================== 00:09:19.724 [2024-12-07T22:40:34.490Z] Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:19.724 00:09:19.983 true 00:09:19.983 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea4f6e-3ed6-469e-bacc-510560aff2cf 00:09:19.983 22:40:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:20.550 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:20.550 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:20.550 22:40:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 75499 00:09:20.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.817 Nvme0n1 : 3.00 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:09:20.817 [2024-12-07T22:40:35.583Z] =================================================================================================================== 00:09:20.817 [2024-12-07T22:40:35.583Z] Total : 6773.33 26.46 0.00 0.00 0.00 0.00 0.00 00:09:20.817 00:09:21.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.796 Nvme0n1 : 4.00 6561.50 25.63 0.00 0.00 0.00 0.00 0.00 00:09:21.796 [2024-12-07T22:40:36.562Z] 
=================================================================================================================== 00:09:21.796 [2024-12-07T22:40:36.562Z] Total : 6561.50 25.63 0.00 0.00 0.00 0.00 0.00 00:09:21.796 00:09:22.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.735 Nvme0n1 : 5.00 6519.20 25.47 0.00 0.00 0.00 0.00 0.00 00:09:22.735 [2024-12-07T22:40:37.501Z] =================================================================================================================== 00:09:22.735 [2024-12-07T22:40:37.501Z] Total : 6519.20 25.47 0.00 0.00 0.00 0.00 0.00 00:09:22.735 00:09:24.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.117 Nvme0n1 : 6.00 6491.00 25.36 0.00 0.00 0.00 0.00 0.00 00:09:24.117 [2024-12-07T22:40:38.883Z] =================================================================================================================== 00:09:24.117 [2024-12-07T22:40:38.883Z] Total : 6491.00 25.36 0.00 0.00 0.00 0.00 0.00 00:09:24.117 00:09:25.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.053 Nvme0n1 : 7.00 6489.00 25.35 0.00 0.00 0.00 0.00 0.00 00:09:25.053 [2024-12-07T22:40:39.819Z] =================================================================================================================== 00:09:25.053 [2024-12-07T22:40:39.819Z] Total : 6489.00 25.35 0.00 0.00 0.00 0.00 0.00 00:09:25.053 00:09:25.988 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.988 Nvme0n1 : 8.00 6440.50 25.16 0.00 0.00 0.00 0.00 0.00 00:09:25.988 [2024-12-07T22:40:40.754Z] =================================================================================================================== 00:09:25.988 [2024-12-07T22:40:40.754Z] Total : 6440.50 25.16 0.00 0.00 0.00 0.00 0.00 00:09:25.988 00:09:26.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.924 Nvme0n1 : 9.00 6416.33 25.06 0.00 0.00 0.00 0.00 0.00 00:09:26.924 [2024-12-07T22:40:41.690Z] =================================================================================================================== 00:09:26.924 [2024-12-07T22:40:41.690Z] Total : 6416.33 25.06 0.00 0.00 0.00 0.00 0.00 00:09:26.924 00:09:27.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.861 Nvme0n1 : 10.00 6409.70 25.04 0.00 0.00 0.00 0.00 0.00 00:09:27.861 [2024-12-07T22:40:42.627Z] =================================================================================================================== 00:09:27.861 [2024-12-07T22:40:42.627Z] Total : 6409.70 25.04 0.00 0.00 0.00 0.00 0.00 00:09:27.861 00:09:27.861 00:09:27.861 Latency(us) 00:09:27.861 [2024-12-07T22:40:42.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.861 Nvme0n1 : 10.01 6417.52 25.07 0.00 0.00 19940.22 6225.92 122016.12 00:09:27.861 [2024-12-07T22:40:42.627Z] =================================================================================================================== 00:09:27.861 [2024-12-07T22:40:42.627Z] Total : 6417.52 25.07 0.00 0.00 19940.22 6225.92 122016.12 00:09:27.861 { 00:09:27.861 "results": [ 00:09:27.861 { 00:09:27.861 "job": "Nvme0n1", 00:09:27.861 "core_mask": "0x2", 00:09:27.861 "workload": "randwrite", 00:09:27.861 "status": "finished", 00:09:27.861 "queue_depth": 128, 00:09:27.861 "io_size": 4096, 00:09:27.861 "runtime": 
10.007759, 00:09:27.861 "iops": 6417.520645730978, 00:09:27.861 "mibps": 25.068440022386632, 00:09:27.861 "io_failed": 0, 00:09:27.861 "io_timeout": 0, 00:09:27.861 "avg_latency_us": 19940.218194047913, 00:09:27.861 "min_latency_us": 6225.92, 00:09:27.861 "max_latency_us": 122016.11636363636 00:09:27.861 } 00:09:27.861 ], 00:09:27.861 "core_count": 1 00:09:27.861 } 00:09:27.861 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 75483 00:09:27.861 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 75483 ']' 00:09:27.861 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 75483 00:09:27.862 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:27.862 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:27.862 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75483 00:09:27.862 killing process with pid 75483 00:09:27.862 Received shutdown signal, test time was about 10.000000 seconds 00:09:27.862 00:09:27.862 Latency(us) 00:09:27.862 [2024-12-07T22:40:42.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.862 [2024-12-07T22:40:42.628Z] =================================================================================================================== 00:09:27.862 [2024-12-07T22:40:42.628Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:27.862 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:27.862 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:27.862 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75483' 00:09:27.862 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 75483 00:09:27.862 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 75483 00:09:28.121 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:28.381 22:40:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:28.640 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea4f6e-3ed6-469e-bacc-510560aff2cf 00:09:28.640 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:28.900 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:28.900 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:28.900 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 75122 00:09:28.900 
22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 75122 00:09:28.900 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 75122 Killed "${NVMF_APP[@]}" "$@" 00:09:28.900 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:28.900 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:28.900 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:28.900 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:28.900 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:28.900 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=75638 00:09:28.900 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 75638 00:09:28.900 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:28.900 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 75638 ']' 00:09:28.900 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.900 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.900 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.900 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.900 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:29.160 [2024-12-07 22:40:43.665088] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:29.160 [2024-12-07 22:40:43.665191] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:29.160 [2024-12-07 22:40:43.804111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.160 [2024-12-07 22:40:43.837148] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:29.160 [2024-12-07 22:40:43.837473] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:29.160 [2024-12-07 22:40:43.837511] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:29.160 [2024-12-07 22:40:43.837519] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:29.160 [2024-12-07 22:40:43.837526] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
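The dirty variant differs from the clean one only at teardown: the first target (pid 75122) is killed with SIGKILL while the lvstore is still open, so its superblock is never marked cleanly unloaded. When the freshly started target (pid 75638) re-creates the aio bdev below, blobstore replays its metadata ("Performing recovery on blobstore") and the test asserts that the grown geometry survived the crash. In outline:

    kill -9 "$nvmfpid"                               # no clean lvstore unload
    # restart nvmf_tgt, then re-attach the backing file:
    rpc.py bdev_aio_create aio_file aio_bdev 4096    # triggers blobstore recovery on load
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expect 61
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99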
00:09:29.160 [2024-12-07 22:40:43.837555] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.160 [2024-12-07 22:40:43.865801] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:29.160 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.160 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:29.160 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:29.160 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:29.160 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:29.419 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.419 22:40:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:29.419 [2024-12-07 22:40:44.169602] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:29.419 [2024-12-07 22:40:44.169933] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:29.419 [2024-12-07 22:40:44.170119] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:29.677 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:29.677 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 5b43960d-aabc-4068-b1d6-40cc28f32ced 00:09:29.677 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=5b43960d-aabc-4068-b1d6-40cc28f32ced 00:09:29.677 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:29.677 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:29.677 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:29.677 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:29.677 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:29.935 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5b43960d-aabc-4068-b1d6-40cc28f32ced -t 2000 00:09:30.193 [ 00:09:30.193 { 00:09:30.193 "name": "5b43960d-aabc-4068-b1d6-40cc28f32ced", 00:09:30.193 "aliases": [ 00:09:30.193 "lvs/lvol" 00:09:30.193 ], 00:09:30.193 "product_name": "Logical Volume", 00:09:30.193 "block_size": 4096, 00:09:30.193 "num_blocks": 38912, 00:09:30.193 "uuid": "5b43960d-aabc-4068-b1d6-40cc28f32ced", 00:09:30.193 "assigned_rate_limits": { 00:09:30.193 "rw_ios_per_sec": 0, 00:09:30.193 "rw_mbytes_per_sec": 0, 00:09:30.193 "r_mbytes_per_sec": 0, 00:09:30.193 "w_mbytes_per_sec": 0 00:09:30.194 }, 00:09:30.194 
"claimed": false, 00:09:30.194 "zoned": false, 00:09:30.194 "supported_io_types": { 00:09:30.194 "read": true, 00:09:30.194 "write": true, 00:09:30.194 "unmap": true, 00:09:30.194 "flush": false, 00:09:30.194 "reset": true, 00:09:30.194 "nvme_admin": false, 00:09:30.194 "nvme_io": false, 00:09:30.194 "nvme_io_md": false, 00:09:30.194 "write_zeroes": true, 00:09:30.194 "zcopy": false, 00:09:30.194 "get_zone_info": false, 00:09:30.194 "zone_management": false, 00:09:30.194 "zone_append": false, 00:09:30.194 "compare": false, 00:09:30.194 "compare_and_write": false, 00:09:30.194 "abort": false, 00:09:30.194 "seek_hole": true, 00:09:30.194 "seek_data": true, 00:09:30.194 "copy": false, 00:09:30.194 "nvme_iov_md": false 00:09:30.194 }, 00:09:30.194 "driver_specific": { 00:09:30.194 "lvol": { 00:09:30.194 "lvol_store_uuid": "6dea4f6e-3ed6-469e-bacc-510560aff2cf", 00:09:30.194 "base_bdev": "aio_bdev", 00:09:30.194 "thin_provision": false, 00:09:30.194 "num_allocated_clusters": 38, 00:09:30.194 "snapshot": false, 00:09:30.194 "clone": false, 00:09:30.194 "esnap_clone": false 00:09:30.194 } 00:09:30.194 } 00:09:30.194 } 00:09:30.194 ] 00:09:30.194 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:30.194 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:30.194 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea4f6e-3ed6-469e-bacc-510560aff2cf 00:09:30.452 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:30.452 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea4f6e-3ed6-469e-bacc-510560aff2cf 00:09:30.452 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:30.709 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:30.709 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:30.967 [2024-12-07 22:40:45.667482] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:30.967 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea4f6e-3ed6-469e-bacc-510560aff2cf 00:09:30.967 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:30.967 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea4f6e-3ed6-469e-bacc-510560aff2cf 00:09:30.967 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:30.967 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.967 22:40:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:30.967 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.967 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:30.967 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.967 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:30.967 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:30.967 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea4f6e-3ed6-469e-bacc-510560aff2cf 00:09:31.226 request: 00:09:31.226 { 00:09:31.226 "uuid": "6dea4f6e-3ed6-469e-bacc-510560aff2cf", 00:09:31.226 "method": "bdev_lvol_get_lvstores", 00:09:31.226 "req_id": 1 00:09:31.226 } 00:09:31.226 Got JSON-RPC error response 00:09:31.226 response: 00:09:31.226 { 00:09:31.226 "code": -19, 00:09:31.226 "message": "No such device" 00:09:31.226 } 00:09:31.226 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:31.226 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:31.226 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:31.226 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:31.226 22:40:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:31.484 aio_bdev 00:09:31.742 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5b43960d-aabc-4068-b1d6-40cc28f32ced 00:09:31.742 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=5b43960d-aabc-4068-b1d6-40cc28f32ced 00:09:31.742 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:31.742 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:31.742 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:31.742 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:31.742 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:32.001 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5b43960d-aabc-4068-b1d6-40cc28f32ced -t 2000 00:09:32.260 [ 00:09:32.260 { 
00:09:32.260 "name": "5b43960d-aabc-4068-b1d6-40cc28f32ced", 00:09:32.260 "aliases": [ 00:09:32.260 "lvs/lvol" 00:09:32.260 ], 00:09:32.260 "product_name": "Logical Volume", 00:09:32.260 "block_size": 4096, 00:09:32.260 "num_blocks": 38912, 00:09:32.260 "uuid": "5b43960d-aabc-4068-b1d6-40cc28f32ced", 00:09:32.260 "assigned_rate_limits": { 00:09:32.260 "rw_ios_per_sec": 0, 00:09:32.260 "rw_mbytes_per_sec": 0, 00:09:32.260 "r_mbytes_per_sec": 0, 00:09:32.260 "w_mbytes_per_sec": 0 00:09:32.260 }, 00:09:32.260 "claimed": false, 00:09:32.260 "zoned": false, 00:09:32.260 "supported_io_types": { 00:09:32.260 "read": true, 00:09:32.260 "write": true, 00:09:32.260 "unmap": true, 00:09:32.260 "flush": false, 00:09:32.260 "reset": true, 00:09:32.260 "nvme_admin": false, 00:09:32.260 "nvme_io": false, 00:09:32.260 "nvme_io_md": false, 00:09:32.260 "write_zeroes": true, 00:09:32.260 "zcopy": false, 00:09:32.260 "get_zone_info": false, 00:09:32.260 "zone_management": false, 00:09:32.260 "zone_append": false, 00:09:32.260 "compare": false, 00:09:32.260 "compare_and_write": false, 00:09:32.260 "abort": false, 00:09:32.260 "seek_hole": true, 00:09:32.260 "seek_data": true, 00:09:32.260 "copy": false, 00:09:32.260 "nvme_iov_md": false 00:09:32.260 }, 00:09:32.260 "driver_specific": { 00:09:32.260 "lvol": { 00:09:32.260 "lvol_store_uuid": "6dea4f6e-3ed6-469e-bacc-510560aff2cf", 00:09:32.260 "base_bdev": "aio_bdev", 00:09:32.260 "thin_provision": false, 00:09:32.260 "num_allocated_clusters": 38, 00:09:32.260 "snapshot": false, 00:09:32.260 "clone": false, 00:09:32.260 "esnap_clone": false 00:09:32.260 } 00:09:32.260 } 00:09:32.260 } 00:09:32.260 ] 00:09:32.260 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:32.260 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea4f6e-3ed6-469e-bacc-510560aff2cf 00:09:32.260 22:40:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:32.519 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:32.519 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6dea4f6e-3ed6-469e-bacc-510560aff2cf 00:09:32.519 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:32.778 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:32.778 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5b43960d-aabc-4068-b1d6-40cc28f32ced 00:09:33.037 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6dea4f6e-3ed6-469e-bacc-510560aff2cf 00:09:33.294 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:33.553 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:33.812 ************************************ 00:09:33.812 END TEST lvs_grow_dirty 00:09:33.812 ************************************ 00:09:33.812 00:09:33.812 real 0m19.593s 00:09:33.812 user 0m40.584s 00:09:33.812 sys 0m9.085s 00:09:33.812 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.812 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:33.812 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:33.812 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:33.812 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:33.812 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:33.812 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:33.812 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:33.812 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:33.812 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:33.812 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:33.812 nvmf_trace.0 00:09:33.812 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:33.812 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:33.812 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:33.812 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:34.071 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:34.071 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:34.071 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:34.071 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:34.071 rmmod nvme_tcp 00:09:34.329 rmmod nvme_fabrics 00:09:34.329 rmmod nvme_keyring 00:09:34.329 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:34.329 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:34.330 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:34.330 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 75638 ']' 00:09:34.330 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 75638 00:09:34.330 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 75638 ']' 00:09:34.330 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 75638 00:09:34.330 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:34.330 22:40:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.330 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75638 00:09:34.330 killing process with pid 75638 00:09:34.330 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:34.330 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:34.330 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75638' 00:09:34.330 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 75638 00:09:34.330 22:40:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 75638 00:09:34.330 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:34.330 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:34.330 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:34.330 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:34.330 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:09:34.330 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:34.330 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:09:34.330 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.330 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:34.330 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:34.330 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:34.588 00:09:34.588 real 0m40.728s 00:09:34.588 user 1m4.168s 00:09:34.588 sys 0m12.385s 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.588 ************************************ 00:09:34.588 END TEST nvmf_lvs_grow 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:34.588 ************************************ 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:34.588 ************************************ 00:09:34.588 START TEST nvmf_bdev_io_wait 00:09:34.588 ************************************ 00:09:34.588 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:34.847 * Looking for test storage... 
00:09:34.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:34.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.847 --rc genhtml_branch_coverage=1 00:09:34.847 --rc genhtml_function_coverage=1 00:09:34.847 --rc genhtml_legend=1 00:09:34.847 --rc geninfo_all_blocks=1 00:09:34.847 --rc geninfo_unexecuted_blocks=1 00:09:34.847 00:09:34.847 ' 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:34.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.847 --rc genhtml_branch_coverage=1 00:09:34.847 --rc genhtml_function_coverage=1 00:09:34.847 --rc genhtml_legend=1 00:09:34.847 --rc geninfo_all_blocks=1 00:09:34.847 --rc geninfo_unexecuted_blocks=1 00:09:34.847 00:09:34.847 ' 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:34.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.847 --rc genhtml_branch_coverage=1 00:09:34.847 --rc genhtml_function_coverage=1 00:09:34.847 --rc genhtml_legend=1 00:09:34.847 --rc geninfo_all_blocks=1 00:09:34.847 --rc geninfo_unexecuted_blocks=1 00:09:34.847 00:09:34.847 ' 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:34.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.847 --rc genhtml_branch_coverage=1 00:09:34.847 --rc genhtml_function_coverage=1 00:09:34.847 --rc genhtml_legend=1 00:09:34.847 --rc geninfo_all_blocks=1 00:09:34.847 --rc geninfo_unexecuted_blocks=1 00:09:34.847 00:09:34.847 ' 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:34.847 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:34.848 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
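The nvmftestinit call traced below first tears down any stale fixture (hence the harmless "Cannot find device" messages) and then rebuilds the veth/bridge/namespace topology the TCP tests run on. A condensed sketch of that topology, using only interface names and addresses that appear in the trace; the second initiator/target interface pair (nvmf_init_if2/nvmf_tgt_if2, 10.0.0.2/10.0.0.4), the link-up commands, and error handling are omitted:

    # Condensed reconstruction of nvmf_veth_init (see the trace below):
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge          # bridge joins the two peer ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # allow NVMe/TCP traffic to the listener port used by the tests
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT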
00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:34.848 
22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:34.848 Cannot find device "nvmf_init_br" 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:34.848 Cannot find device "nvmf_init_br2" 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:34.848 Cannot find device "nvmf_tgt_br" 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:34.848 Cannot find device "nvmf_tgt_br2" 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:34.848 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:35.107 Cannot find device "nvmf_init_br" 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:35.107 Cannot find device "nvmf_init_br2" 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:35.107 Cannot find device "nvmf_tgt_br" 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:35.107 Cannot find device "nvmf_tgt_br2" 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:35.107 Cannot find device "nvmf_br" 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:35.107 Cannot find device "nvmf_init_if" 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:35.107 Cannot find device "nvmf_init_if2" 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:35.107 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:35.107 
22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:35.107 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:35.107 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:35.108 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:35.108 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:35.108 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:35.108 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:35.108 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:35.108 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:35.108 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:35.108 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:35.108 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:35.108 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:35.108 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:35.108 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:35.108 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:35.108 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:35.108 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:35.108 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:35.108 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:35.366 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:35.366 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:35.366 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:35.366 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:35.366 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:35.366 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:35.366 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:35.366 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:35.366 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:09:35.366 00:09:35.366 --- 10.0.0.3 ping statistics --- 00:09:35.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.366 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:09:35.366 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:35.366 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:35.366 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:09:35.366 00:09:35.366 --- 10.0.0.4 ping statistics --- 00:09:35.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.366 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:09:35.366 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:35.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:35.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:09:35.367 00:09:35.367 --- 10.0.0.1 ping statistics --- 00:09:35.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.367 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:35.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:35.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:09:35.367 00:09:35.367 --- 10.0.0.2 ping statistics --- 00:09:35.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:35.367 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # return 0 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=76003 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 76003 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 76003 ']' 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:35.367 22:40:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:35.367 [2024-12-07 22:40:49.994631] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:35.367 [2024-12-07 22:40:49.995429] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:35.626 [2024-12-07 22:40:50.138533] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.626 [2024-12-07 22:40:50.180740] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:35.626 [2024-12-07 22:40:50.181058] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:35.626 [2024-12-07 22:40:50.181083] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:35.626 [2024-12-07 22:40:50.181093] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:35.626 [2024-12-07 22:40:50.181102] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:35.626 [2024-12-07 22:40:50.181210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.626 [2024-12-07 22:40:50.181957] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.626 [2024-12-07 22:40:50.182038] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:35.626 [2024-12-07 22:40:50.182045] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:35.626 [2024-12-07 22:40:50.340515] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:35.626 [2024-12-07 22:40:50.355339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:35.626 Malloc0 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.626 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:35.885 [2024-12-07 22:40:50.412116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=76025 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:35.885 { 00:09:35.885 
"params": { 00:09:35.885 "name": "Nvme$subsystem", 00:09:35.885 "trtype": "$TEST_TRANSPORT", 00:09:35.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:35.885 "adrfam": "ipv4", 00:09:35.885 "trsvcid": "$NVMF_PORT", 00:09:35.885 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:35.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:35.885 "hdgst": ${hdgst:-false}, 00:09:35.885 "ddgst": ${ddgst:-false} 00:09:35.885 }, 00:09:35.885 "method": "bdev_nvme_attach_controller" 00:09:35.885 } 00:09:35.885 EOF 00:09:35.885 )") 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=76027 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:35.885 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=76030 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:35.886 { 00:09:35.886 "params": { 00:09:35.886 "name": "Nvme$subsystem", 00:09:35.886 "trtype": "$TEST_TRANSPORT", 00:09:35.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:35.886 "adrfam": "ipv4", 00:09:35.886 "trsvcid": "$NVMF_PORT", 00:09:35.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:35.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:35.886 "hdgst": ${hdgst:-false}, 00:09:35.886 "ddgst": ${ddgst:-false} 00:09:35.886 }, 00:09:35.886 "method": "bdev_nvme_attach_controller" 00:09:35.886 } 00:09:35.886 EOF 00:09:35.886 )") 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=76032 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:35.886 { 00:09:35.886 "params": { 00:09:35.886 "name": "Nvme$subsystem", 00:09:35.886 "trtype": "$TEST_TRANSPORT", 00:09:35.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:35.886 "adrfam": "ipv4", 00:09:35.886 "trsvcid": "$NVMF_PORT", 00:09:35.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:35.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:35.886 "hdgst": ${hdgst:-false}, 00:09:35.886 "ddgst": ${ddgst:-false} 00:09:35.886 }, 00:09:35.886 "method": "bdev_nvme_attach_controller" 00:09:35.886 } 00:09:35.886 EOF 00:09:35.886 )") 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:35.886 "params": { 00:09:35.886 "name": "Nvme1", 00:09:35.886 "trtype": "tcp", 00:09:35.886 "traddr": "10.0.0.3", 00:09:35.886 "adrfam": "ipv4", 00:09:35.886 "trsvcid": "4420", 00:09:35.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:35.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:35.886 "hdgst": false, 00:09:35.886 "ddgst": false 00:09:35.886 }, 00:09:35.886 "method": "bdev_nvme_attach_controller" 00:09:35.886 }' 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
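Here the script fans out four bdevperf instances against the same Nvme1 controller, one workload each (write, read, flush, unmap), pinned to distinct cores via -m 0x10/0x20/0x40/0x80 and separated by instance ids -i 1..4, which give each process its own hugepage file-prefix (spdk1..spdk4 in the EAL parameter lines below). Each instance reads a generated JSON config from /dev/fd/63 via process substitution; the rendered form is visible in the printf output around this point. Stripped of the tracing, the orchestration reduces to roughly this sketch (config.json stands in for the generated file, and bdevperf is assumed to be on PATH):

  pids=()
  for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
    set -- $spec    # core mask, instance id, workload
    # -q 128: queue depth, -o 4096: IO size in bytes, -t 1: run for 1 s, -s 256: 256 MiB hugemem
    bdevperf -m "$1" -i "$2" --json config.json -q 128 -o 4096 -w "$3" -t 1 -s 256 &
    pids+=($!)
  done
  wait "${pids[@]}"   # the script does the same via WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID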
00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:35.886 { 00:09:35.886 "params": { 00:09:35.886 "name": "Nvme$subsystem", 00:09:35.886 "trtype": "$TEST_TRANSPORT", 00:09:35.886 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:35.886 "adrfam": "ipv4", 00:09:35.886 "trsvcid": "$NVMF_PORT", 00:09:35.886 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:35.886 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:35.886 "hdgst": ${hdgst:-false}, 00:09:35.886 "ddgst": ${ddgst:-false} 00:09:35.886 }, 00:09:35.886 "method": "bdev_nvme_attach_controller" 00:09:35.886 } 00:09:35.886 EOF 00:09:35.886 )") 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:35.886 "params": { 00:09:35.886 "name": "Nvme1", 00:09:35.886 "trtype": "tcp", 00:09:35.886 "traddr": "10.0.0.3", 00:09:35.886 "adrfam": "ipv4", 00:09:35.886 "trsvcid": "4420", 00:09:35.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:35.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:35.886 "hdgst": false, 00:09:35.886 "ddgst": false 00:09:35.886 }, 00:09:35.886 "method": "bdev_nvme_attach_controller" 00:09:35.886 }' 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:35.886 "params": { 00:09:35.886 "name": "Nvme1", 00:09:35.886 "trtype": "tcp", 00:09:35.886 "traddr": "10.0.0.3", 00:09:35.886 "adrfam": "ipv4", 00:09:35.886 "trsvcid": "4420", 00:09:35.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:35.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:35.886 "hdgst": false, 00:09:35.886 "ddgst": false 00:09:35.886 }, 00:09:35.886 "method": "bdev_nvme_attach_controller" 00:09:35.886 }' 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:35.886 "params": { 00:09:35.886 "name": "Nvme1", 00:09:35.886 "trtype": "tcp", 00:09:35.886 "traddr": "10.0.0.3", 00:09:35.886 "adrfam": "ipv4", 00:09:35.886 "trsvcid": "4420", 00:09:35.886 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:35.886 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:35.886 "hdgst": false, 00:09:35.886 "ddgst": false 00:09:35.886 }, 00:09:35.886 "method": "bdev_nvme_attach_controller" 00:09:35.886 }' 00:09:35.886 [2024-12-07 22:40:50.476496] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:35.886 [2024-12-07 22:40:50.476591] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:35.886 22:40:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 76025 00:09:35.886 [2024-12-07 22:40:50.487556] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:35.886 [2024-12-07 22:40:50.487632] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:35.886 [2024-12-07 22:40:50.493666] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:35.886 [2024-12-07 22:40:50.493741] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:35.886 [2024-12-07 22:40:50.495001] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:35.886 [2024-12-07 22:40:50.495076] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:36.150 [2024-12-07 22:40:50.657594] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.150 [2024-12-07 22:40:50.687381] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:36.150 [2024-12-07 22:40:50.696418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.150 [2024-12-07 22:40:50.721902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.150 [2024-12-07 22:40:50.723214] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:36.150 [2024-12-07 22:40:50.741687] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.150 [2024-12-07 22:40:50.768447] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:09:36.150 [2024-12-07 22:40:50.769074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.150 [2024-12-07 22:40:50.795438] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.150 [2024-12-07 22:40:50.814578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.150 [2024-12-07 22:40:50.822468] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:09:36.150 Running I/O for 1 seconds... 00:09:36.150 [2024-12-07 22:40:50.857055] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:36.150 Running I/O for 1 seconds... 00:09:36.427 Running I/O for 1 seconds... 00:09:36.427 Running I/O for 1 seconds... 
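The per-job result tables that follow report both IOPS and MiB/s for 4096-byte transfers; the two columns are consistent via MiB/s = IOPS * 4096 / 2^20, e.g. 164271.24 * 4096 / 2^20 gives about 641.68 for the flush job. That flush posts roughly twenty times the IOPS of the data-moving workloads is expected here: flushing a malloc (RAM-backed) bdev has nothing to persist and completes almost immediately, so the number mostly reflects per-command round-trip overhead. A quick check of the conversion (a sketch; bc is assumed to be available):

  $ echo 'scale=2; 164271.24 * 4096 / 2^20' | bc
  641.68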
00:09:37.371 164600.00 IOPS, 642.97 MiB/s 00:09:37.371 Latency(us) 00:09:37.371 [2024-12-07T22:40:52.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.371 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:37.371 Nvme1n1 : 1.00 164271.24 641.68 0.00 0.00 775.18 404.01 1995.87 00:09:37.371 [2024-12-07T22:40:52.137Z] =================================================================================================================== 00:09:37.371 [2024-12-07T22:40:52.137Z] Total : 164271.24 641.68 0.00 0.00 775.18 404.01 1995.87 00:09:37.371 10179.00 IOPS, 39.76 MiB/s 00:09:37.371 Latency(us) 00:09:37.371 [2024-12-07T22:40:52.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.371 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:37.371 Nvme1n1 : 1.01 10232.45 39.97 0.00 0.00 12456.98 6821.70 20018.27 00:09:37.371 [2024-12-07T22:40:52.137Z] =================================================================================================================== 00:09:37.371 [2024-12-07T22:40:52.137Z] Total : 10232.45 39.97 0.00 0.00 12456.98 6821.70 20018.27 00:09:37.371 7316.00 IOPS, 28.58 MiB/s 00:09:37.371 Latency(us) 00:09:37.371 [2024-12-07T22:40:52.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.371 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:37.371 Nvme1n1 : 1.01 7364.71 28.77 0.00 0.00 17278.23 9532.51 26810.18 00:09:37.371 [2024-12-07T22:40:52.137Z] =================================================================================================================== 00:09:37.371 [2024-12-07T22:40:52.137Z] Total : 7364.71 28.77 0.00 0.00 17278.23 9532.51 26810.18 00:09:37.371 8410.00 IOPS, 32.85 MiB/s 00:09:37.371 Latency(us) 00:09:37.371 [2024-12-07T22:40:52.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.372 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:37.372 Nvme1n1 : 1.01 8492.25 33.17 0.00 0.00 15010.01 6642.97 24665.37 00:09:37.372 [2024-12-07T22:40:52.138Z] =================================================================================================================== 00:09:37.372 [2024-12-07T22:40:52.138Z] Total : 8492.25 33.17 0.00 0.00 15010.01 6642.97 24665.37 00:09:37.372 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 76027 00:09:37.372 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 76030 00:09:37.372 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 76032 00:09:37.372 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:37.372 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.372 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:37.372 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.372 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:37.372 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:37.372 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:09:37.372 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:37.630 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:37.631 rmmod nvme_tcp 00:09:37.631 rmmod nvme_fabrics 00:09:37.631 rmmod nvme_keyring 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 76003 ']' 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 76003 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 76003 ']' 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 76003 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76003 00:09:37.631 killing process with pid 76003 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76003' 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 76003 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 76003 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:37.631 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:37.891 00:09:37.891 real 0m3.285s 00:09:37.891 user 0m12.933s 00:09:37.891 sys 0m2.140s 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:37.891 ************************************ 00:09:37.891 END TEST nvmf_bdev_io_wait 00:09:37.891 ************************************ 00:09:37.891 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:38.150 22:40:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:38.150 22:40:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:38.150 22:40:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.150 22:40:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:38.150 ************************************ 00:09:38.150 START TEST nvmf_queue_depth 00:09:38.150 ************************************ 00:09:38.150 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:38.150 * Looking for test storage... 
00:09:38.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:38.150 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:38.150 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:09:38.150 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:38.150 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:38.150 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:38.150 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:38.150 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:38.150 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:38.150 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:38.150 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:38.150 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:38.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.151 --rc genhtml_branch_coverage=1 00:09:38.151 --rc genhtml_function_coverage=1 00:09:38.151 --rc genhtml_legend=1 00:09:38.151 --rc geninfo_all_blocks=1 00:09:38.151 --rc geninfo_unexecuted_blocks=1 00:09:38.151 00:09:38.151 ' 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:38.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.151 --rc genhtml_branch_coverage=1 00:09:38.151 --rc genhtml_function_coverage=1 00:09:38.151 --rc genhtml_legend=1 00:09:38.151 --rc geninfo_all_blocks=1 00:09:38.151 --rc geninfo_unexecuted_blocks=1 00:09:38.151 00:09:38.151 ' 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:38.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.151 --rc genhtml_branch_coverage=1 00:09:38.151 --rc genhtml_function_coverage=1 00:09:38.151 --rc genhtml_legend=1 00:09:38.151 --rc geninfo_all_blocks=1 00:09:38.151 --rc geninfo_unexecuted_blocks=1 00:09:38.151 00:09:38.151 ' 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:38.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:38.151 --rc genhtml_branch_coverage=1 00:09:38.151 --rc genhtml_function_coverage=1 00:09:38.151 --rc genhtml_legend=1 00:09:38.151 --rc geninfo_all_blocks=1 00:09:38.151 --rc geninfo_unexecuted_blocks=1 00:09:38.151 00:09:38.151 ' 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:38.151 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:38.411 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:38.411 
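The nvmftestinit sequence traced next builds the virtual network the queue-depth test runs over: initiator veths nvmf_init_if/nvmf_init_if2 (10.0.0.1, 10.0.0.2) stay in the default namespace, target veths nvmf_tgt_if/nvmf_tgt_if2 (10.0.0.3, 10.0.0.4) move into nvmf_tgt_ns_spdk, their bridge-side peers are enslaved to nvmf_br, iptables ACCEPT rules are inserted for TCP port 4420, and a round of pings verifies connectivity. The initial "Cannot find device"/"Cannot open network namespace" messages are just the idempotent pre-cleanup failing harmlessly on a fresh node. Reduced to one initiator/target pair, the topology comes down to roughly this sketch:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                           # initiator -> target sanity check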
22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:38.411 22:40:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:38.411 Cannot find device "nvmf_init_br" 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:38.411 Cannot find device "nvmf_init_br2" 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:38.411 Cannot find device "nvmf_tgt_br" 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:38.411 Cannot find device "nvmf_tgt_br2" 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:38.411 Cannot find device "nvmf_init_br" 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:38.411 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:38.411 Cannot find device "nvmf_init_br2" 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:38.411 Cannot find device "nvmf_tgt_br" 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:38.411 Cannot find device "nvmf_tgt_br2" 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:38.411 Cannot find device "nvmf_br" 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:38.411 Cannot find device "nvmf_init_if" 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:38.411 Cannot find device "nvmf_init_if2" 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:38.411 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.411 22:40:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:38.411 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:38.411 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:38.671 
22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:38.671 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:38.671 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.110 ms 00:09:38.671 00:09:38.671 --- 10.0.0.3 ping statistics --- 00:09:38.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.671 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:38.671 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:38.671 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:09:38.671 00:09:38.671 --- 10.0.0.4 ping statistics --- 00:09:38.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.671 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:38.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:38.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:09:38.671 00:09:38.671 --- 10.0.0.1 ping statistics --- 00:09:38.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.671 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:38.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:38.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:09:38.671 00:09:38.671 --- 10.0.0.2 ping statistics --- 00:09:38.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.671 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # return 0 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:38.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=76294 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 76294 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 76294 ']' 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.671 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:38.671 [2024-12-07 22:40:53.433207] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:38.671 [2024-12-07 22:40:53.433322] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.931 [2024-12-07 22:40:53.575225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.931 [2024-12-07 22:40:53.617936] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.931 [2024-12-07 22:40:53.617998] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.931 [2024-12-07 22:40:53.618013] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.931 [2024-12-07 22:40:53.618023] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.931 [2024-12-07 22:40:53.618032] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.931 [2024-12-07 22:40:53.618064] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.931 [2024-12-07 22:40:53.652290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.191 [2024-12-07 22:40:53.746584] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.191 Malloc0 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.191 [2024-12-07 22:40:53.799083] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:39.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=76319 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 76319 /var/tmp/bdevperf.sock 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 76319 ']' 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.191 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.191 [2024-12-07 22:40:53.862284] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:09:39.191 [2024-12-07 22:40:53.862608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76319 ] 00:09:39.450 [2024-12-07 22:40:54.002315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.450 [2024-12-07 22:40:54.035564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.450 [2024-12-07 22:40:54.063714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.450 22:40:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.450 22:40:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:39.450 22:40:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:39.450 22:40:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.450 22:40:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.450 NVMe0n1 00:09:39.450 22:40:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.451 22:40:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:39.709 Running I/O for 10 seconds... 00:09:41.581 7202.00 IOPS, 28.13 MiB/s [2024-12-07T22:40:57.724Z] 7814.00 IOPS, 30.52 MiB/s [2024-12-07T22:40:58.662Z] 8163.67 IOPS, 31.89 MiB/s [2024-12-07T22:40:59.600Z] 8461.25 IOPS, 33.05 MiB/s [2024-12-07T22:41:00.537Z] 8575.40 IOPS, 33.50 MiB/s [2024-12-07T22:41:01.473Z] 8635.67 IOPS, 33.73 MiB/s [2024-12-07T22:41:02.410Z] 8716.14 IOPS, 34.05 MiB/s [2024-12-07T22:41:03.785Z] 8737.12 IOPS, 34.13 MiB/s [2024-12-07T22:41:04.353Z] 8748.78 IOPS, 34.17 MiB/s [2024-12-07T22:41:04.611Z] 8724.90 IOPS, 34.08 MiB/s 00:09:49.845 Latency(us) 00:09:49.845 [2024-12-07T22:41:04.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.845 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:49.845 Verification LBA range: start 0x0 length 0x4000 00:09:49.845 NVMe0n1 : 10.07 8762.19 34.23 0.00 0.00 116382.30 16086.11 87222.46 00:09:49.845 [2024-12-07T22:41:04.611Z] =================================================================================================================== 00:09:49.845 [2024-12-07T22:41:04.611Z] Total : 8762.19 34.23 0.00 0.00 116382.30 16086.11 87222.46 00:09:49.845 { 00:09:49.845 "results": [ 00:09:49.845 { 00:09:49.845 "job": "NVMe0n1", 00:09:49.845 "core_mask": "0x1", 00:09:49.845 "workload": "verify", 00:09:49.845 "status": "finished", 00:09:49.845 "verify_range": { 00:09:49.845 "start": 0, 00:09:49.845 "length": 16384 00:09:49.845 }, 00:09:49.845 "queue_depth": 1024, 00:09:49.845 "io_size": 4096, 00:09:49.845 "runtime": 10.069855, 00:09:49.845 "iops": 8762.191709811115, 00:09:49.845 "mibps": 34.22731136644967, 00:09:49.845 "io_failed": 0, 00:09:49.845 "io_timeout": 0, 00:09:49.845 "avg_latency_us": 116382.2954812307, 00:09:49.845 "min_latency_us": 16086.10909090909, 00:09:49.845 "max_latency_us": 87222.45818181818 00:09:49.845 } 
00:09:49.845 ], 00:09:49.845 "core_count": 1 00:09:49.845 } 00:09:49.845 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 76319 00:09:49.845 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 76319 ']' 00:09:49.845 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 76319 00:09:49.845 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:49.845 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:49.845 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76319 00:09:49.845 killing process with pid 76319 00:09:49.845 Received shutdown signal, test time was about 10.000000 seconds 00:09:49.845 00:09:49.845 Latency(us) 00:09:49.845 [2024-12-07T22:41:04.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.845 [2024-12-07T22:41:04.611Z] =================================================================================================================== 00:09:49.845 [2024-12-07T22:41:04.611Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:49.845 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:49.845 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:49.845 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76319' 00:09:49.846 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 76319 00:09:49.846 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 76319 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:50.104 rmmod nvme_tcp 00:09:50.104 rmmod nvme_fabrics 00:09:50.104 rmmod nvme_keyring 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 76294 ']' 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 76294 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 76294 ']' 00:09:50.104 
22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 76294 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76294 00:09:50.104 killing process with pid 76294 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76294' 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 76294 00:09:50.104 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 76294 00:09:50.363 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:50.363 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:50.363 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:50.363 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:50.363 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:50.363 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:09:50.363 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:09:50.363 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:50.363 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:50.363 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:50.363 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:50.363 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:50.363 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:50.363 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:50.363 22:41:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:50.363 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:50.363 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:50.363 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:50.363 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:50.363 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:50.364 22:41:05 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:50.624 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:50.624 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:50.624 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.624 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.625 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.625 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:50.625 00:09:50.625 real 0m12.499s 00:09:50.625 user 0m21.310s 00:09:50.625 sys 0m2.014s 00:09:50.625 ************************************ 00:09:50.625 END TEST nvmf_queue_depth 00:09:50.625 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:50.625 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:50.625 ************************************ 00:09:50.625 22:41:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:50.625 22:41:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:50.625 22:41:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:50.625 22:41:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:50.625 ************************************ 00:09:50.625 START TEST nvmf_target_multipath 00:09:50.625 ************************************ 00:09:50.625 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:50.625 * Looking for test storage... 
00:09:50.625 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:50.625 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:50.625 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:09:50.625 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:50.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.883 --rc genhtml_branch_coverage=1 00:09:50.883 --rc genhtml_function_coverage=1 00:09:50.883 --rc genhtml_legend=1 00:09:50.883 --rc geninfo_all_blocks=1 00:09:50.883 --rc geninfo_unexecuted_blocks=1 00:09:50.883 00:09:50.883 ' 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:50.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.883 --rc genhtml_branch_coverage=1 00:09:50.883 --rc genhtml_function_coverage=1 00:09:50.883 --rc genhtml_legend=1 00:09:50.883 --rc geninfo_all_blocks=1 00:09:50.883 --rc geninfo_unexecuted_blocks=1 00:09:50.883 00:09:50.883 ' 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:50.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.883 --rc genhtml_branch_coverage=1 00:09:50.883 --rc genhtml_function_coverage=1 00:09:50.883 --rc genhtml_legend=1 00:09:50.883 --rc geninfo_all_blocks=1 00:09:50.883 --rc geninfo_unexecuted_blocks=1 00:09:50.883 00:09:50.883 ' 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:50.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.883 --rc genhtml_branch_coverage=1 00:09:50.883 --rc genhtml_function_coverage=1 00:09:50.883 --rc genhtml_legend=1 00:09:50.883 --rc geninfo_all_blocks=1 00:09:50.883 --rc geninfo_unexecuted_blocks=1 00:09:50.883 00:09:50.883 ' 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.883 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.884 
22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:50.884 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:50.884 22:41:05 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:50.884 Cannot find device "nvmf_init_br" 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:50.884 Cannot find device "nvmf_init_br2" 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:50.884 Cannot find device "nvmf_tgt_br" 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:50.884 Cannot find device "nvmf_tgt_br2" 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:50.884 Cannot find device "nvmf_init_br" 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:50.884 Cannot find device "nvmf_init_br2" 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:50.884 Cannot find device "nvmf_tgt_br" 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:50.884 Cannot find device "nvmf_tgt_br2" 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:50.884 Cannot find device "nvmf_br" 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:50.884 Cannot find device "nvmf_init_if" 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:50.884 Cannot find device "nvmf_init_if2" 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:50.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:50.884 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:50.884 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
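By this point nvmftestinit has assembled the virtual test network: the initiator-side veth ends (nvmf_init_if at 10.0.0.1, nvmf_init_if2 at 10.0.0.2) stay in the root namespace, the target-side ends (nvmf_tgt_if at 10.0.0.3, nvmf_tgt_if2 at 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends are enslaved to the nvmf_br bridge with TCP port 4420 opened, as the commands just below show. A minimal sketch of one initiator/target leg, using only the names and addresses from this trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator leg
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target leg
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge joins the two legs
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The pings that follow (10.0.0.3/10.0.0.4 from the root namespace, 10.0.0.1/10.0.0.2 from inside nvmf_tgt_ns_spdk) verify both directions of this topology before the target is started.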
00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:51.141 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:51.142 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:51.142 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:09:51.142 00:09:51.142 --- 10.0.0.3 ping statistics --- 00:09:51.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.142 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:51.142 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:51.142 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:09:51.142 00:09:51.142 --- 10.0.0.4 ping statistics --- 00:09:51.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.142 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:51.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:51.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:51.142 00:09:51.142 --- 10.0.0.1 ping statistics --- 00:09:51.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.142 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:51.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:09:51.142 00:09:51.142 --- 10.0.0.2 ping statistics --- 00:09:51.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.142 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # return 0 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # nvmfpid=76683 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # waitforlisten 76683 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 76683 ']' 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:09:51.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.142 22:41:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:51.400 [2024-12-07 22:41:05.937128] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:09:51.400 [2024-12-07 22:41:05.937228] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.400 [2024-12-07 22:41:06.080720] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.400 [2024-12-07 22:41:06.126073] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.400 [2024-12-07 22:41:06.126145] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.400 [2024-12-07 22:41:06.126161] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.400 [2024-12-07 22:41:06.126171] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.400 [2024-12-07 22:41:06.126180] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.400 [2024-12-07 22:41:06.126349] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.400 [2024-12-07 22:41:06.128921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.400 [2024-12-07 22:41:06.129064] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.400 [2024-12-07 22:41:06.129157] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.400 [2024-12-07 22:41:06.163034] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:51.658 22:41:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:51.658 22:41:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:09:51.658 22:41:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:51.658 22:41:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:51.658 22:41:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:51.658 22:41:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.658 22:41:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:51.916 [2024-12-07 22:41:06.558443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.916 22:41:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:52.173 Malloc0 00:09:52.173 22:41:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:52.430 22:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:52.687 22:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:52.945 [2024-12-07 22:41:07.617444] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:52.945 22:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:53.203 [2024-12-07 22:41:07.861624] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:53.203 22:41:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:53.461 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:09:53.461 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:53.461 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:09:53.461 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:53.461 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:53.461 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:09:55.995 22:41:10 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=76771 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:09:55.995 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:09:55.995 [global] 00:09:55.995 thread=1 00:09:55.995 invalidate=1 00:09:55.995 rw=randrw 00:09:55.995 time_based=1 00:09:55.995 runtime=6 00:09:55.995 ioengine=libaio 00:09:55.995 direct=1 00:09:55.995 bs=4096 00:09:55.995 iodepth=128 00:09:55.995 norandommap=0 00:09:55.995 numjobs=1 00:09:55.995 00:09:55.995 verify_dump=1 00:09:55.995 verify_backlog=512 00:09:55.995 verify_state_save=0 00:09:55.995 do_verify=1 00:09:55.995 verify=crc32c-intel 00:09:55.995 [job0] 00:09:55.995 filename=/dev/nvme0n1 00:09:55.995 Could not set queue depth (nvme0n1) 00:09:55.995 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:55.995 fio-3.35 00:09:55.995 Starting 1 thread 00:09:56.564 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:09:56.823 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:09:57.083 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:09:57.083 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:09:57.083 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:57.083 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:09:57.083 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:09:57.083 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:09:57.083 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:09:57.083 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:09:57.083 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:09:57.083 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:09:57.083 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]]
00:09:57.083 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:09:57.083 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:09:57.342 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible
00:09:57.910 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized
00:09:57.910 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized
00:09:57.910 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:09:57.910 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:09:57.910 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:09:57.910 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:09:57.910 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible
00:09:57.910 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible
00:09:57.910 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:09:57.910 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:09:57.910 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:09:57.910 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:09:57.910 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 76771
00:10:02.101
00:10:02.101 job0: (groupid=0, jobs=1): err= 0: pid=76792: Sat Dec 7 22:41:16 2024
00:10:02.101 read: IOPS=10.1k, BW=39.6MiB/s (41.5MB/s)(238MiB/6007msec)
00:10:02.101 slat (usec): min=3, max=7760, avg=58.93, stdev=236.65
00:10:02.101 clat (usec): min=1805, max=15843, avg=8660.69, stdev=1557.36
00:10:02.101 lat (usec): min=1815, max=15870, avg=8719.62, stdev=1561.83
00:10:02.101 clat percentiles (usec):
00:10:02.101 | 1.00th=[ 4490], 5.00th=[ 6456], 10.00th=[ 7308], 20.00th=[ 7832],
00:10:02.101 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8717],
00:10:02.101 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[10028], 95.00th=[12256],
00:10:02.101 | 99.00th=[13566], 99.50th=[14091], 99.90th=[14746], 99.95th=[15008],
00:10:02.101 | 99.99th=[15401]
00:10:02.101 bw ( KiB/s): min= 8432, max=25152, per=50.77%, avg=20564.36, stdev=5330.62, samples=11
00:10:02.101 iops : min= 2108, max= 6288, avg=5141.09, stdev=1332.65, samples=11
00:10:02.101 write: IOPS=5832, BW=22.8MiB/s (23.9MB/s)(122MiB/5348msec); 0 zone resets
00:10:02.101 slat (usec): min=17, max=3052, avg=66.56, stdev=162.76
00:10:02.101 clat (usec): min=1255, max=15135, avg=7538.06, stdev=1376.29
00:10:02.101 lat (usec): min=1281, max=15161, avg=7604.62, stdev=1381.11
00:10:02.101 clat percentiles (usec):
00:10:02.101 | 1.00th=[ 3490], 5.00th=[ 4359], 10.00th=[ 5669], 20.00th=[ 6980],
00:10:02.101 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7767], 60.00th=[ 7963],
00:10:02.101 | 70.00th=[ 8160], 80.00th=[ 8356], 90.00th=[ 8717], 95.00th=[ 9110],
00:10:02.101 | 99.00th=[11469], 99.50th=[12125], 99.90th=[13435], 99.95th=[13698],
00:10:02.101 | 99.99th=[14353]
00:10:02.101 bw ( KiB/s): min= 8744, max=24680, per=88.50%, avg=20646.55, stdev=5143.45, samples=11
00:10:02.101 iops : min= 2186, max= 6170, avg=5161.64, stdev=1285.86, samples=11
00:10:02.101 lat (msec) : 2=0.03%, 4=1.29%, 10=91.05%, 20=7.62%
00:10:02.101 cpu : usr=5.66%, sys=21.35%, ctx=5307, majf=0, minf=108
00:10:02.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:10:02.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:02.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:10:02.101 issued rwts: total=60831,31190,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:02.101 latency : target=0, window=0, percentile=100.00%, depth=128
00:10:02.101
00:10:02.101 Run status group 0 (all jobs):
00:10:02.101 READ: bw=39.6MiB/s (41.5MB/s), 39.6MiB/s-39.6MiB/s (41.5MB/s-41.5MB/s), io=238MiB (249MB), run=6007-6007msec
00:10:02.101 WRITE: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=122MiB (128MB), run=5348-5348msec
00:10:02.101
00:10:02.101 Disk stats (read/write):
00:10:02.101 nvme0n1: ios=59982/30571, merge=0/0, ticks=498286/216805, in_queue=715091, util=98.66%
00:10:02.101 22:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
00:10:02.101 22:41:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized
00:10:02.361 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized
00:10:02.361 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized
00:10:02.361 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:02.361 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:10:02.361 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:10:02.361 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:10:02.361 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized
00:10:02.361 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized
00:10:02.361 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:02.361 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:10:02.361 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:10:02.361 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:10:02.361 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin
00:10:02.361 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=76867
00:10:02.361 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1
00:10:02.619 22:41:17 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v
00:10:02.619 [global]
00:10:02.619 thread=1
00:10:02.619 invalidate=1
00:10:02.619 rw=randrw
00:10:02.619 time_based=1
00:10:02.619 runtime=6
00:10:02.619 ioengine=libaio
00:10:02.619 direct=1
00:10:02.619 bs=4096
00:10:02.619 iodepth=128
00:10:02.619 norandommap=0
00:10:02.619 numjobs=1
00:10:02.619
00:10:02.619 verify_dump=1
00:10:02.619 verify_backlog=512
00:10:02.619 verify_state_save=0
00:10:02.619 do_verify=1
00:10:02.619 verify=crc32c-intel
00:10:02.619 [job0]
00:10:02.619 filename=/dev/nvme0n1
00:10:02.619 Could not set queue depth (nvme0n1)
00:10:02.619 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:10:02.619 fio-3.35
00:10:02.619 Starting 1 thread
00:10:03.555 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:10:03.813 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
00:10:04.072 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible
00:10:04.072 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible
00:10:04.072 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:04.072 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:10:04.072 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:10:04.072 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:10:04.072 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized
00:10:04.072 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized
00:10:04.072 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:04.072 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:10:04.072 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:10:04.072 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:10:04.072 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:10:04.331 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible
00:10:04.589 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized
00:10:04.589 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized
00:10:04.589 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:04.589 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:10:04.589 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:10:04.589 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:10:04.589 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible
00:10:04.590 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible
00:10:04.590 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:10:04.590 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:10:04.590 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:10:04.590 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:10:04.590 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 76867
00:10:08.847
00:10:08.847 job0: (groupid=0, jobs=1): err= 0: pid=76894: Sat Dec 7 22:41:23 2024
00:10:08.847 read: IOPS=11.6k, BW=45.1MiB/s (47.3MB/s)(271MiB/6002msec)
00:10:08.847 slat (usec): min=2, max=8108, avg=43.11, stdev=196.91
00:10:08.847 clat (usec): min=323, max=16448, avg=7580.85, stdev=1991.68
00:10:08.847 lat (usec): min=341, max=16471, avg=7623.96, stdev=2007.52
00:10:08.847 clat percentiles (usec):
00:10:08.847 | 1.00th=[ 3163], 5.00th=[ 4080], 10.00th=[ 4752], 20.00th=[ 5866],
00:10:08.847 | 30.00th=[ 6915], 40.00th=[ 7504], 50.00th=[ 7832], 60.00th=[ 8094],
00:10:08.847 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[ 9503], 95.00th=[11338],
00:10:08.847 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14353], 99.95th=[14615],
00:10:08.847 | 99.99th=[15008]
00:10:08.847 bw ( KiB/s): min= 9408, max=40744, per=53.63%, avg=24779.64, stdev=8198.90, samples=11
00:10:08.847 iops : min= 2352, max=10186, avg=6194.91, stdev=2049.72, samples=11
00:10:08.847 write: IOPS=6768, BW=26.4MiB/s (27.7MB/s)(145MiB/5481msec); 0 zone resets
00:10:08.847 slat (usec): min=4, max=6444, avg=53.58, stdev=142.73
00:10:08.847 clat (usec): min=1624, max=14860, avg=6375.53, stdev=1839.16
00:10:08.847 lat (usec): min=1652, max=14888, avg=6429.11, stdev=1853.39
00:10:08.847 clat percentiles (usec):
00:10:08.847 | 1.00th=[ 2671], 5.00th=[ 3294], 10.00th=[ 3720], 20.00th=[ 4359],
00:10:08.847 | 30.00th=[ 5080], 40.00th=[ 6390], 50.00th=[ 6915], 60.00th=[ 7242],
00:10:08.847 | 70.00th=[ 7570], 80.00th=[ 7832], 90.00th=[ 8291], 95.00th=[ 8717],
00:10:08.847 | 99.00th=[10814], 99.50th=[11863], 99.90th=[13566], 99.95th=[13698],
00:10:08.847 | 99.99th=[14353]
00:10:08.847 bw ( KiB/s): min= 9808, max=39952, per=91.40%, avg=24745.45, stdev=8022.66, samples=11
00:10:08.847 iops : min= 2452, max= 9988, avg=6186.36, stdev=2005.67, samples=11
00:10:08.847 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01%
00:10:08.847 lat (msec) : 2=0.08%, 4=7.74%, 10=86.95%, 20=5.23%
00:10:08.847 cpu : usr=6.65%, sys=22.61%, ctx=6140, majf=0, minf=78
00:10:08.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:10:08.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:08.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:10:08.847 issued rwts: total=69325,37097,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:08.847 latency : target=0,
window=0, percentile=100.00%, depth=128 00:10:08.847 00:10:08.847 Run status group 0 (all jobs): 00:10:08.847 READ: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=271MiB (284MB), run=6002-6002msec 00:10:08.847 WRITE: bw=26.4MiB/s (27.7MB/s), 26.4MiB/s-26.4MiB/s (27.7MB/s-27.7MB/s), io=145MiB (152MB), run=5481-5481msec 00:10:08.847 00:10:08.847 Disk stats (read/write): 00:10:08.847 nvme0n1: ios=68324/36562, merge=0/0, ticks=494124/217029, in_queue=711153, util=98.65% 00:10:08.847 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:08.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:08.847 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:08.847 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:10:08.847 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:08.847 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.847 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:08.847 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.847 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:10:08.847 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.105 rmmod nvme_tcp 00:10:09.105 rmmod nvme_fabrics 00:10:09.105 rmmod nvme_keyring 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # '[' -n 
76683 ']' 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # killprocess 76683 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 76683 ']' 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 76683 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:09.105 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76683 00:10:09.362 killing process with pid 76683 00:10:09.362 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:09.362 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:09.362 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76683' 00:10:09.362 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 76683 00:10:09.362 22:41:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 76683 00:10:09.362 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:09.362 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:09.362 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:09.362 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:09.362 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:10:09.362 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:09.362 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:10:09.362 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.363 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:09.363 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:09.363 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:09.363 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:09.363 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:09.363 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:09.363 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:09.363 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:09.363 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:09.363 22:41:24 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:09.620 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:09.620 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:09.620 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:09.620 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:09.620 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:09.620 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.620 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.620 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.620 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:09.620 00:10:09.620 real 0m19.037s 00:10:09.620 user 1m10.146s 00:10:09.620 sys 0m10.094s 00:10:09.620 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:09.620 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:09.620 ************************************ 00:10:09.620 END TEST nvmf_target_multipath 00:10:09.620 ************************************ 00:10:09.620 22:41:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:09.620 22:41:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:09.620 22:41:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:09.620 22:41:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:09.620 ************************************ 00:10:09.620 START TEST nvmf_zcopy 00:10:09.620 ************************************ 00:10:09.620 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:09.878 * Looking for test storage... 
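For reference, the check_ana_state helper that the multipath trace above re-entered before and after every ANA transition (multipath.sh lines 18-25) can be reconstructed from its xtrace. A minimal sketch, with the retry loop assumed since the trace only ever shows the first pass through the guards:

    # Sketch of target/multipath.sh:check_ana_state, reconstructed from the
    # xtrace above; the sleep/retry body is an assumption, only the locals
    # and the two [[ ]] guards appear in the trace.
    check_ana_state() {
        local path=$1 ana_state=$2
        local timeout=20
        local ana_state_f=/sys/block/$path/ana_state
        while [[ ! -e $ana_state_f || $(<"$ana_state_f") != "$ana_state" ]]; do
            (( timeout-- == 0 )) && return 1    # assumed failure path
            sleep 1
        done
    }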
00:10:09.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:09.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.878 --rc genhtml_branch_coverage=1 00:10:09.878 --rc genhtml_function_coverage=1 00:10:09.878 --rc genhtml_legend=1 00:10:09.878 --rc geninfo_all_blocks=1 00:10:09.878 --rc geninfo_unexecuted_blocks=1 00:10:09.878 00:10:09.878 ' 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:09.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.878 --rc genhtml_branch_coverage=1 00:10:09.878 --rc genhtml_function_coverage=1 00:10:09.878 --rc genhtml_legend=1 00:10:09.878 --rc geninfo_all_blocks=1 00:10:09.878 --rc geninfo_unexecuted_blocks=1 00:10:09.878 00:10:09.878 ' 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:09.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.878 --rc genhtml_branch_coverage=1 00:10:09.878 --rc genhtml_function_coverage=1 00:10:09.878 --rc genhtml_legend=1 00:10:09.878 --rc geninfo_all_blocks=1 00:10:09.878 --rc geninfo_unexecuted_blocks=1 00:10:09.878 00:10:09.878 ' 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:09.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.878 --rc genhtml_branch_coverage=1 00:10:09.878 --rc genhtml_function_coverage=1 00:10:09.878 --rc genhtml_legend=1 00:10:09.878 --rc geninfo_all_blocks=1 00:10:09.878 --rc geninfo_unexecuted_blocks=1 00:10:09.878 00:10:09.878 ' 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
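The scripts/common.sh calls traced just above (cmp_versions, decimal) implement the version test "lt 1.15 2" that selects the lcov coverage flags: each version string is split on '.', '-' and ':' and the components are compared numerically, left to right. A simplified, runnable sketch of that walk (the real helper also validates each component through decimal()):

    # Simplified sketch of scripts/common.sh's lt/cmp_versions logic.
    version_lt() {                        # returns 0 when $1 < $2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                          # equal is not less-than
    }
    version_lt 1.15 2 && echo 'lcov is older than 2.x: keep the --rc options above'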
00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.878 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:09.879 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
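Condensed, the nvmf/common.sh preamble traced above fixes these defaults before nvmftestinit runs. The values are taken from the trace; the host NQN/ID pair changes every run because it comes from nvme gen-hostnqn, and the NVME_HOSTID extraction shown is an assumed equivalent of what the script does:

    # Defaults from test/nvmf/common.sh as seen in the trace above.
    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NET_TYPE=virt                                   # veth topology, no physical NIC
    NVME_HOSTNQN=$(nvme gen-hostnqn)                # random uuid-based NQN per run
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}             # assumed; trace shows the uuid reused as hostid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")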
00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:09.879 Cannot find device "nvmf_init_br" 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:09.879 22:41:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:09.879 Cannot find device "nvmf_init_br2" 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:09.879 Cannot find device "nvmf_tgt_br" 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:09.879 Cannot find device "nvmf_tgt_br2" 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:09.879 Cannot find device "nvmf_init_br" 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:09.879 Cannot find device "nvmf_init_br2" 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:09.879 Cannot find device "nvmf_tgt_br" 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:09.879 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:10.137 Cannot find device "nvmf_tgt_br2" 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:10.137 Cannot find device "nvmf_br" 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:10.137 Cannot find device "nvmf_init_if" 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:10.137 Cannot find device "nvmf_init_if2" 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:10.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:10.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:10.137 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:10.137 22:41:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:10.398 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:10.398 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:10.398 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:10.398 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:10.398 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms 00:10:10.398 00:10:10.398 --- 10.0.0.3 ping statistics --- 00:10:10.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.398 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:10:10.398 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:10.398 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:10.398 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:10:10.398 00:10:10.398 --- 10.0.0.4 ping statistics --- 00:10:10.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.398 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:10.398 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:10.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:10.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:10.398 00:10:10.398 --- 10.0.0.1 ping statistics --- 00:10:10.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.398 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:10.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:10.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:10:10.399 00:10:10.399 --- 10.0.0.2 ping statistics --- 00:10:10.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:10.399 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # return 0 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=77195 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 77195 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 77195 ']' 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:10.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:10.399 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:10.399 [2024-12-07 22:41:25.013071] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
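Stripped of the xtrace noise, everything from nvmf_veth_init through nvmfappstart above amounts to: build a bridged veth topology (initiator side 10.0.0.1/10.0.0.2 on the host, target side 10.0.0.3/10.0.0.4 inside the nvmf_tgt_ns_spdk namespace), open TCP port 4420 in iptables, verify reachability with ping, then start nvmf_tgt inside the namespace and wait for its RPC socket. A sketch with one of the two interface pairs, the second pair being analogous:

    # Condensed replay of nvmf_veth_init + nvmfappstart from the trace above;
    # nvmf_init_if2/nvmf_tgt_if2 (10.0.0.2/10.0.0.4) repeat the same steps.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                      # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!                              # 77195 in this run; -m 0x2 pins core 1
    waitforlisten "$nvmfpid"                # returns once /var/tmp/spdk.sock answers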
00:10:10.399 [2024-12-07 22:41:25.013418] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:10.399 [2024-12-07 22:41:25.154631] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.660 [2024-12-07 22:41:25.190775] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:10.660 [2024-12-07 22:41:25.190824] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:10.660 [2024-12-07 22:41:25.190833] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:10.660 [2024-12-07 22:41:25.190841] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:10.660 [2024-12-07 22:41:25.190847] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:10.660 [2024-12-07 22:41:25.190871] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.660 [2024-12-07 22:41:25.221238] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:11.594 22:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.594 22:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:11.594 22:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:11.594 22:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:11.594 22:41:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.594 [2024-12-07 22:41:26.037852] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.594 [2024-12-07 22:41:26.054032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.594 malloc0 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:11.594 { 00:10:11.594 "params": { 00:10:11.594 "name": "Nvme$subsystem", 00:10:11.594 "trtype": "$TEST_TRANSPORT", 00:10:11.594 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:11.594 "adrfam": "ipv4", 00:10:11.594 "trsvcid": "$NVMF_PORT", 00:10:11.594 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:11.594 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:11.594 "hdgst": ${hdgst:-false}, 00:10:11.594 "ddgst": ${ddgst:-false} 00:10:11.594 }, 00:10:11.594 "method": "bdev_nvme_attach_controller" 00:10:11.594 } 00:10:11.594 EOF 00:10:11.594 )") 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
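Taken together, the zcopy.sh setup just traced is a six-call RPC sequence: a TCP transport with zero-copy enabled, a subsystem capped at 10 namespaces, data and discovery listeners on the first target IP, and a malloc bdev attached as namespace 1. Replayed by hand it would be:

    # The rpc_cmd calls from zcopy.sh lines 22-30, as plain rpc.py invocations.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0       # 32 MiB bdev, 4096-byte blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1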
00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=,
00:10:11.594 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:10:11.594 "params": {
00:10:11.594 "name": "Nvme1",
00:10:11.594 "trtype": "tcp",
00:10:11.594 "traddr": "10.0.0.3",
00:10:11.594 "adrfam": "ipv4",
00:10:11.594 "trsvcid": "4420",
00:10:11.594 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:11.594 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:11.594 "hdgst": false,
00:10:11.594 "ddgst": false
00:10:11.594 },
00:10:11.594 "method": "bdev_nvme_attach_controller"
00:10:11.594 }'
00:10:11.594 [2024-12-07 22:41:26.158973] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:10:11.594 [2024-12-07 22:41:26.159062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77228 ]
00:10:11.594 [2024-12-07 22:41:26.300772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:11.594 [2024-12-07 22:41:26.342761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:10:11.852 [2024-12-07 22:41:26.384990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:10:11.852 Running I/O for 10 seconds...
00:10:13.724 6155.00 IOPS, 48.09 MiB/s
[2024-12-07T22:41:29.867Z] 6228.50 IOPS, 48.66 MiB/s
[2024-12-07T22:41:30.801Z] 6240.33 IOPS, 48.75 MiB/s
[2024-12-07T22:41:31.737Z] 6237.00 IOPS, 48.73 MiB/s
[2024-12-07T22:41:32.753Z] 6233.60 IOPS, 48.70 MiB/s
[2024-12-07T22:41:33.731Z] 6217.17 IOPS, 48.57 MiB/s
[2024-12-07T22:41:34.670Z] 6156.14 IOPS, 48.09 MiB/s
[2024-12-07T22:41:35.606Z] 6108.25 IOPS, 47.72 MiB/s
[2024-12-07T22:41:36.541Z] 6067.11 IOPS, 47.40 MiB/s
[2024-12-07T22:41:36.541Z] 6098.70 IOPS, 47.65 MiB/s
00:10:21.775 Latency(us)
00:10:21.775 [2024-12-07T22:41:36.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:21.775 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:21.775 Verification LBA range: start 0x0 length 0x1000
00:10:21.775 Nvme1n1 : 10.01 6101.98 47.67 0.00 0.00 20910.36 1735.21 36223.53
00:10:21.775 [2024-12-07T22:41:36.541Z] ===================================================================================================================
00:10:21.775 [2024-12-07T22:41:36.541Z] Total : 6101.98 47.67 0.00 0.00 20910.36 1735.21 36223.53
00:10:22.046 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=77345
00:10:22.046 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:10:22.046 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:22.046 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:10:22.046 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:10:22.046 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=()
00:10:22.046 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config
00:10:22.046 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}"
00:10:22.046 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF
00:10:22.046 {
00:10:22.046 "params": {
00:10:22.046 "name": "Nvme$subsystem",
00:10:22.046 "trtype": "$TEST_TRANSPORT",
00:10:22.046 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:22.046 "adrfam": "ipv4",
00:10:22.046 "trsvcid": "$NVMF_PORT",
00:10:22.046 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:22.046 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:22.046 "hdgst": ${hdgst:-false},
00:10:22.046 "ddgst": ${ddgst:-false}
00:10:22.046 },
00:10:22.046 "method": "bdev_nvme_attach_controller"
00:10:22.046 }
00:10:22.046 EOF
00:10:22.046 )")
00:10:22.047 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat
00:10:22.047 [2024-12-07 22:41:36.651126] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-12-07 22:41:36.651169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:22.047 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq .
00:10:22.047 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=,
00:10:22.047 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{
00:10:22.047 "params": {
00:10:22.047 "name": "Nvme1",
00:10:22.047 "trtype": "tcp",
00:10:22.047 "traddr": "10.0.0.3",
00:10:22.047 "adrfam": "ipv4",
00:10:22.047 "trsvcid": "4420",
00:10:22.047 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:22.047 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:22.047 "hdgst": false,
00:10:22.047 "ddgst": false
00:10:22.047 },
00:10:22.047 "method": "bdev_nvme_attach_controller"
00:10:22.047 }'
00:10:22.047 [2024-12-07 22:41:36.659098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:22.047 [2024-12-07 22:41:36.659128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:22.047 [2024-12-07 22:41:36.667094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:22.047 [2024-12-07 22:41:36.667332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:22.047 [2024-12-07 22:41:36.679085] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:22.047 [2024-12-07 22:41:36.679117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:22.047 [2024-12-07 22:41:36.691086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:22.047 [2024-12-07 22:41:36.691115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:22.047 [2024-12-07 22:41:36.703093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:22.047 [2024-12-07 22:41:36.703123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:22.047 [2024-12-07 22:41:36.703249] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
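
The printf just traced emits only the inner config entry; what bdevperf actually parses from /dev/fd/63 (and /dev/fd/62 in the first run) is that entry spliced into SPDK's regular JSON config layout with a top-level "subsystems" array. A reconstruction of the full document is sketched below: the inner "params" object is verbatim from the trace, while the outer wrapper follows SPDK's config schema and should be read as an assumption rather than a dump from this job.

# Hypothetical on-disk equivalent of what gen_nvmf_target_json pipes to bdevperf.
cat << 'EOF' > /tmp/bdevperf.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

As a sanity check on the verify run that just finished: 6101.98 IOPS at the 8192-byte I/O size is 6101.98 × 8192 / 2^20 ≈ 47.67 MiB/s, matching the MiB/s column of the latency table above; and a full queue of 128 at that rate implies, by Little's law, an average completion time of about 128 / 6101.98 ≈ 21.0 ms, consistent with the reported 20910.36 µs average.
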
00:10:22.047 [2024-12-07 22:41:36.703316] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77345 ] 00:10:22.047 [2024-12-07 22:41:36.715093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.047 [2024-12-07 22:41:36.715123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.047 [2024-12-07 22:41:36.727107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.047 [2024-12-07 22:41:36.727325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.047 [2024-12-07 22:41:36.739102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.047 [2024-12-07 22:41:36.739150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.047 [2024-12-07 22:41:36.751117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.047 [2024-12-07 22:41:36.751180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.047 [2024-12-07 22:41:36.763108] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.047 [2024-12-07 22:41:36.763328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.047 [2024-12-07 22:41:36.775113] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.047 [2024-12-07 22:41:36.775141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.047 [2024-12-07 22:41:36.787122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.047 [2024-12-07 22:41:36.787149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.047 [2024-12-07 22:41:36.799152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.047 [2024-12-07 22:41:36.799213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.305 [2024-12-07 22:41:36.811147] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.305 [2024-12-07 22:41:36.811199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.305 [2024-12-07 22:41:36.823144] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.305 [2024-12-07 22:41:36.823207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.305 [2024-12-07 22:41:36.835154] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.305 [2024-12-07 22:41:36.835198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.305 [2024-12-07 22:41:36.839198] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.305 [2024-12-07 22:41:36.843148] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.305 [2024-12-07 22:41:36.843194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.305 [2024-12-07 22:41:36.855158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:36.855210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:22.306 [2024-12-07 22:41:36.863162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:36.863197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:36.875164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:36.875213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:36.875211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.306 [2024-12-07 22:41:36.883143] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:36.883168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:36.895190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:36.895246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:36.907201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:36.907257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:36.912937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:22.306 [2024-12-07 22:41:36.915198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:36.915241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:36.927210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:36.927276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:36.935316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:36.935348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:36.943338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:36.943371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:36.951327] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:36.951357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:36.959339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:36.959368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:36.967348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:36.967379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:36.975353] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:36.975382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:36.983368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:22.306 [2024-12-07 22:41:36.983401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:36.991389] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:36.991421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:36.999404] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:36.999436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:37.007405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:37.007435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 Running I/O for 5 seconds... 00:10:22.306 [2024-12-07 22:41:37.015465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:37.015645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:37.032770] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:37.032972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:37.042945] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:37.043094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:37.055484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:37.055654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.306 [2024-12-07 22:41:37.066941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.306 [2024-12-07 22:41:37.067106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.078461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.078615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.092538] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.092573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.101713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.101904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.115596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.115631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.124554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.124588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.136667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.136700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
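
From here until the randrw job finishes, the log is a steady stream of paired errors: subsystem.c rejects each add because NSID 1 is already occupied by malloc0, and nvmf_rpc.c surfaces that as the RPC-level "Unable to add namespace" failure. This is the point of the exercise rather than a malfunction: while bdevperf (perfpid 77345 above) drives I/O, the test keeps re-issuing the same namespace-add RPC and expects it to fail cleanly every time. A hedged sketch of a driver loop that would produce this stream; the log only shows the target side, so the loop shape and the convention of treating an unexpected success as a test failure are assumptions:

# Keep re-adding the already-claimed namespace for as long as the I/O job runs.
while kill -0 "$perfpid" 2> /dev/null; do
  # rpc_cmd is the harness wrapper around scripts/rpc.py; each call below is
  # expected to fail with the two errors logged here, so a success aborts.
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 && exit 1
done
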
00:10:22.564 [2024-12-07 22:41:37.146617] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.146671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.161492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.161658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.179721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.179756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.189606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.189640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.203158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.203191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.212378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.212545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.226843] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.226916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.238318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.238397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.255207] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.255378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.264383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.264418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.278596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.278829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.287548] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.287582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.302226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.302258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.312060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 [2024-12-07 22:41:37.312096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.564 [2024-12-07 22:41:37.325211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.564 
[2024-12-07 22:41:37.325366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.339497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.339535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.355759] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.355794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.365072] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.365106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.380571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.380738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.391666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.391831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.408122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.408155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.417646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.417680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.432140] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.432173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.443812] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.443846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.461366] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.461534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.473048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.473085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.488514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.488703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.504746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.504810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.522982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.523045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.537272] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.537307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.545741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.545775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.557497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.557530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.567633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.567666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:22.823 [2024-12-07 22:41:37.581781] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:22.823 [2024-12-07 22:41:37.581814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.593174] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.593213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.605561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.605596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.617084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.617122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.633317] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.633356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.649197] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.649264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.658291] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.658494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.673651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.673817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.683030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.683064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.695415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.695448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.712370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.712537] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.728634] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.728689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.738433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.738470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.750460] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.750498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.762857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.762915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.778336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.778399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.794338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.794414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.803284] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.803453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.816357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.816392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.825911] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.825939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.083 [2024-12-07 22:41:37.840530] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.083 [2024-12-07 22:41:37.840569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:37.851553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:37.851589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:37.866146] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:37.866198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:37.882871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:37.882949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:37.892980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:37.893018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:37.907206] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:37.907240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:37.918487] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:37.918524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:37.935191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:37.935241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:37.950994] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:37.951027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:37.960152] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:37.960323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:37.975310] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:37.975459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:37.984860] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:37.984903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:37.997292] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:37.997325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 11883.00 IOPS, 92.84 MiB/s [2024-12-07T22:41:38.109Z] [2024-12-07 22:41:38.013888] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:38.013950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:38.029455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:38.029606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:38.039247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:38.039282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:38.053105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:38.053139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:38.061785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:38.061819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:38.073998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.343 [2024-12-07 22:41:38.074032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.343 [2024-12-07 22:41:38.089467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:23.343 [2024-12-07 22:41:38.089502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.109043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.109079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.119597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.119632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.131254] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.131288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.141042] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.141091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.156103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.156141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.171565] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.171733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.181295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.181329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.192697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.192730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.203505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.203539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.213461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.213495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.228110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.228142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.237067] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.237100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.249370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.249404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.265830] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.265864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.278079] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.278115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.295129] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.295163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.311081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.311118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.319952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.601 [2024-12-07 22:41:38.319985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.601 [2024-12-07 22:41:38.332486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.602 [2024-12-07 22:41:38.332520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.602 [2024-12-07 22:41:38.342833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.602 [2024-12-07 22:41:38.342881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.602 [2024-12-07 22:41:38.357044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.602 [2024-12-07 22:41:38.357219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.372485] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.372652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.381931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.382094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.397175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.397355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.406512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.406716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.417230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.417393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.429730] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.429924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.439505] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.439667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.453468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.453619] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.467557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.467744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.484536] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.484718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.494087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.494269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.504982] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.505150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.521726] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.521918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.539257] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.539452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.549658] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.549827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.562515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.562668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.573680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.573861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.586282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.586475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.602323] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.602522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.859 [2024-12-07 22:41:38.618908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:23.859 [2024-12-07 22:41:38.619095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.629555] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.629592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.640461] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.640626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.650829] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.650863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.661451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.661487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.673093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.673129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.684444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.684478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.695515] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.695684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.706422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.706565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.719904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.719951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.736724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.736760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.754098] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.754137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.764432] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.764601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.779832] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.780026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.796632] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.796669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.805926] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.805994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.821150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.821185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.830481] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.830518] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.842687] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.842738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.858545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.858581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.117 [2024-12-07 22:41:38.876659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.117 [2024-12-07 22:41:38.876693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:38.888046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:38.888081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:38.900332] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:38.900366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:38.909721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:38.909755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:38.920473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:38.920507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:38.935074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:38.935107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:38.944446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:38.944480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:38.959793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:38.959827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:38.975468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:38.975507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:38.986051] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:38.986089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:39.000903] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:39.000951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 11895.00 IOPS, 92.93 MiB/s [2024-12-07T22:41:39.142Z] [2024-12-07 22:41:39.010970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:39.011004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 
22:41:39.025320] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:39.025354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:39.034496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:39.034531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:39.047077] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:39.047111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:39.062019] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:39.062054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:39.077422] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:39.077457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:39.095964] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:39.095998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:39.106076] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:39.106110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:39.120596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:39.120630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.376 [2024-12-07 22:41:39.129619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.376 [2024-12-07 22:41:39.129653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.635 [2024-12-07 22:41:39.144695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.635 [2024-12-07 22:41:39.144732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.635 [2024-12-07 22:41:39.160800] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.635 [2024-12-07 22:41:39.160835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.635 [2024-12-07 22:41:39.170600] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.635 [2024-12-07 22:41:39.170637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.635 [2024-12-07 22:41:39.184089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.635 [2024-12-07 22:41:39.184128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.635 [2024-12-07 22:41:39.193486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.635 [2024-12-07 22:41:39.193520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:24.635 [2024-12-07 22:41:39.208203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:24.635 [2024-12-07 22:41:39.208242] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:24.635 [2024-12-07 22:41:39.218852] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:24.635 [2024-12-07 22:41:39.218941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats at roughly 10-20 ms intervals from 22:41:39.234 through 22:41:39.951; several dozen verbatim repetitions elided ...]
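For context on the pair of errors collapsed above: subsystem.c rejects the add because a namespace with the requested NSID is already attached, after which the RPC handler logs the generic "Unable to add namespace" failure. A minimal by-hand reproduction, assuming a running SPDK target that already has subsystem nqn.2016-06.io.spdk:cnode1 and an attached bdev (the bdev name Malloc0 below is illustrative, not from this log), would be to request the same explicit NSID twice via rpc.py:

  # first add succeeds and attaches the bdev as NSID 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
  # second add requests the same explicit NSID; the target logs
  # "Requested NSID 1 already in use" and the RPC returns an error
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1

The sheer volume of repeats here suggests a tight retry loop in the test script; the looping process (pid 77345) is killed further down.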
[... error pair repeats from 22:41:39.959 through 22:41:40.006 ...]
00:10:25.412 11966.67 IOPS, 93.49 MiB/s [2024-12-07T22:41:40.178Z]
[... error pair repeats from 22:41:40.022 through 22:41:41.004; several dozen verbatim repetitions elided ...]
00:10:26.448 12021.50 IOPS, 93.92 MiB/s [2024-12-07T22:41:41.214Z]
[... error pair repeats from 22:41:41.019 through 22:41:41.952; several dozen verbatim repetitions elided ...]
[... error pair repeats from 22:41:41.969 through 22:41:41.994 ...]
00:10:27.485 12024.40 IOPS, 93.94 MiB/s [2024-12-07T22:41:42.251Z]
[... error pair repeats at 22:41:42.012 and 22:41:42.019 ...]
00:10:27.485 Latency(us)
00:10:27.485 [2024-12-07T22:41:42.251Z] Device Information : runtime(s)      IOPS    MiB/s  Fail/s  TO/s   Average      min       max
00:10:27.485 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:27.485 Nvme1n1            :       5.01  12025.91    93.95    0.00  0.00  10631.66  4349.21  20018.27
00:10:27.485 [2024-12-07T22:41:42.251Z] ===================================================================================================================
00:10:27.485 [2024-12-07T22:41:42.251Z] Total              :             12025.91    93.95    0.00  0.00  10631.66  4349.21  20018.27
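Two sanity checks on the job summary above (arithmetic added here, not part of the log). The interleaved "IOPS, MiB/s" lines are the I/O job's periodic progress prints, and the throughput columns are self-consistent: 12025.91 IOPS at the job's 8192-byte I/O size is 12025.91 x 8192 = 98,516,255 bytes/s, and 98,516,255 / 1,048,576 = 93.95 MiB/s, exactly the reported MiB/s. The latency column also squares with Little's law: at queue depth 128 and an average latency of 10631.66 us, the expected rate is 128 / 0.010632 s, about 12,040 IOPS, within 0.2% of the measured 12,025.91.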
[... the same two-line error pair repeats from 22:41:42.031 through 22:41:42.163; final repetitions elided ...]
00:10:27.485 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (77345) - No such process
00:10:27.485 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 77345
00:10:27.485 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:27.485 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.485 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:27.485 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:27.485 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:27.486 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:27.486 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:27.486 delay0
22:41:42
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.486 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:27.486 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.486 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:27.486 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.486 22:41:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:27.744 [2024-12-07 22:41:42.359822] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:34.340 Initializing NVMe Controllers 00:10:34.340 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:34.340 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:34.340 Initialization complete. Launching workers. 00:10:34.340 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 94 00:10:34.340 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 381, failed to submit 33 00:10:34.340 success 255, unsuccessful 126, failed 0 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:34.340 rmmod nvme_tcp 00:10:34.340 rmmod nvme_fabrics 00:10:34.340 rmmod nvme_keyring 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 77195 ']' 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 77195 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 77195 ']' 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 77195 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77195 
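A note on what the preceding trace is doing, plus a consistency check on the abort run (annotations added here, not log output). zcopy.sh swaps the real namespace for delay0, a delay bdev layered on malloc0; bdev_delay_create's -r/-t/-w/-n arguments set the average and p99 read/write latencies in microseconds, so 1000000 makes every I/O take on the order of a second, which guarantees the abort example always finds inflight commands to cancel. The abort counters reported above are internally consistent: 255 successful + 126 unsuccessful aborts = 381 submitted, and adding the 33 that failed to submit gives 414 abort attempts, which lines up with the 414 I/Os the run reports (320 completed + 94 failed).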
00:10:34.340 killing process with pid 77195 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77195' 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 77195 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 77195 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-save 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.340 22:41:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:34.340 00:10:34.340 real 0m24.590s 00:10:34.340 user 0m39.855s 00:10:34.340 sys 0m6.742s 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:34.340 ************************************ 00:10:34.340 END TEST nvmf_zcopy 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:34.340 ************************************ 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:34.340 ************************************ 00:10:34.340 START TEST nvmf_nmic 00:10:34.340 ************************************ 00:10:34.340 22:41:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:34.340 * Looking for test storage... 00:10:34.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:34.340 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:34.340 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:10:34.341 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.600 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:34.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.601 --rc genhtml_branch_coverage=1 00:10:34.601 --rc genhtml_function_coverage=1 00:10:34.601 --rc genhtml_legend=1 00:10:34.601 --rc geninfo_all_blocks=1 00:10:34.601 --rc geninfo_unexecuted_blocks=1 00:10:34.601 00:10:34.601 ' 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:34.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.601 --rc genhtml_branch_coverage=1 00:10:34.601 --rc genhtml_function_coverage=1 00:10:34.601 --rc genhtml_legend=1 00:10:34.601 --rc geninfo_all_blocks=1 00:10:34.601 --rc geninfo_unexecuted_blocks=1 00:10:34.601 00:10:34.601 ' 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:34.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.601 --rc genhtml_branch_coverage=1 00:10:34.601 --rc genhtml_function_coverage=1 00:10:34.601 --rc genhtml_legend=1 00:10:34.601 --rc geninfo_all_blocks=1 00:10:34.601 --rc geninfo_unexecuted_blocks=1 00:10:34.601 00:10:34.601 ' 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:34.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.601 --rc genhtml_branch_coverage=1 00:10:34.601 --rc genhtml_function_coverage=1 00:10:34.601 --rc genhtml_legend=1 00:10:34.601 --rc geninfo_all_blocks=1 00:10:34.601 --rc geninfo_unexecuted_blocks=1 00:10:34.601 00:10:34.601 ' 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.601 22:41:49 
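The lt/cmp_versions trace above is the suite's shell version comparison: both version strings are split on ".", "-" and ":" and compared field by field as integers. A condensed sketch of that logic follows (the real helper lives in scripts/common.sh and additionally validates each field through decimal):

    # Return 0 when version $1 is strictly less than version $2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not less-than
    }

Here lt 1.15 2 succeeds (1 < 2 in the first field), which is why the lcov branch/function coverage options were exported just above.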
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.601 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.601 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:34.602 22:41:49 
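The "line 33: [: : integer expression expected" complaint above is a real, if harmless, wart: build_nvmf_app_args evaluates '[' '' -eq 1 ']' with a variable that expands empty, test refuses to treat the empty string as an integer, prints the error, and execution falls through to the next branch. A defensive version would default the value before comparing (SPDK_RUN_FLAG is a stand-in name; the trace does not show which variable is empty here):

    # Hypothetical guard: give the flag a numeric default so [ always sees an integer.
    if [ "${SPDK_RUN_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-extra-arg)   # placeholder for whatever line 33 would append
    fi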
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:34.602 Cannot 
find device "nvmf_init_br" 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:34.602 Cannot find device "nvmf_init_br2" 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:34.602 Cannot find device "nvmf_tgt_br" 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:34.602 Cannot find device "nvmf_tgt_br2" 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:34.602 Cannot find device "nvmf_init_br" 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:34.602 Cannot find device "nvmf_init_br2" 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:34.602 Cannot find device "nvmf_tgt_br" 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:34.602 Cannot find device "nvmf_tgt_br2" 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:34.602 Cannot find device "nvmf_br" 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:34.602 Cannot find device "nvmf_init_if" 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:34.602 Cannot find device "nvmf_init_if2" 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:34.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:34.602 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:34.602 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:34.861 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:34.862 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:34.862 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:10:34.862 00:10:34.862 --- 10.0.0.3 ping statistics --- 00:10:34.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.862 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:34.862 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:34.862 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:10:34.862 00:10:34.862 --- 10.0.0.4 ping statistics --- 00:10:34.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.862 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:34.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:34.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:10:34.862 00:10:34.862 --- 10.0.0.1 ping statistics --- 00:10:34.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.862 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:34.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:34.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:10:34.862 00:10:34.862 --- 10.0.0.2 ping statistics --- 00:10:34.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.862 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # return 0 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=77728 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 77728 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 77728 ']' 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:34.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:34.862 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.121 [2024-12-07 22:41:49.648896] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
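The four ping checks that just passed confirm the topology nvmf_veth_init built. Consolidated from the trace above (every command appears there verbatim): two host-side initiator interfaces at 10.0.0.1/.2, two target interfaces at 10.0.0.3/.4 inside the nvmf_tgt_ns_spdk namespace, all patched through the nvmf_br bridge.

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" up && ip link set "$br" master nvmf_br
    done
    # The ACCEPT rules carry an SPDK_NVMF comment so teardown can strip them
    # later with: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
    ping -c 1 10.0.0.3                                  # host -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target -> host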
00:10:35.121 [2024-12-07 22:41:49.649020] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.121 [2024-12-07 22:41:49.793893] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.121 [2024-12-07 22:41:49.831291] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.121 [2024-12-07 22:41:49.831341] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:35.121 [2024-12-07 22:41:49.831353] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.121 [2024-12-07 22:41:49.831361] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.121 [2024-12-07 22:41:49.831368] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:35.121 [2024-12-07 22:41:49.831526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.121 [2024-12-07 22:41:49.831670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.121 [2024-12-07 22:41:49.832311] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.121 [2024-12-07 22:41:49.832321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.121 [2024-12-07 22:41:49.862355] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.379 [2024-12-07 22:41:49.949776] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.379 Malloc0 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:35.379 22:41:49 
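nvmfappstart has just launched the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten blocks until the RPC socket answers. The rpc_cmd calls around this point are ordinary JSON-RPCs; the same provisioning can be driven by hand with scripts/rpc.py, using the verbs and arguments exactly as traced (default /var/tmp/spdk.sock assumed):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420

Test case 1 below then deliberately adds the same Malloc0 to a second subsystem and expects the failure, since a bdev is claimed exclusive_write by the first subsystem that opens it.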
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.379 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.380 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.380 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:35.380 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.380 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.380 [2024-12-07 22:41:49.996302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:35.380 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.380 test case1: single bdev can't be used in multiple subsystems 00:10:35.380 22:41:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.380 [2024-12-07 22:41:50.020160] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:35.380 [2024-12-07 22:41:50.020207] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:35.380 [2024-12-07 22:41:50.020220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.380 request: 00:10:35.380 { 00:10:35.380 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:35.380 "namespace": { 00:10:35.380 "bdev_name": "Malloc0", 00:10:35.380 "no_auto_visible": false 00:10:35.380 }, 00:10:35.380 "method": "nvmf_subsystem_add_ns", 00:10:35.380 "req_id": 1 00:10:35.380 } 00:10:35.380 Got JSON-RPC error response 00:10:35.380 response: 00:10:35.380 { 00:10:35.380 "code": -32602, 00:10:35.380 "message": "Invalid parameters" 00:10:35.380 } 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:35.380 Adding namespace failed - expected result. 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:35.380 test case2: host connect to nvmf target in multiple paths 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:35.380 [2024-12-07 22:41:50.032337] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.380 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:35.639 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:35.639 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:35.639 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:35.639 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:35.639 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:35.639 22:41:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:38.172 22:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:38.172 22:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:38.172 22:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:38.172 22:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:38.172 22:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:38.172 22:41:52 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:38.172 22:41:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:38.172 [global] 00:10:38.172 thread=1 00:10:38.172 invalidate=1 00:10:38.172 rw=write 00:10:38.172 time_based=1 00:10:38.172 runtime=1 00:10:38.172 ioengine=libaio 00:10:38.172 direct=1 00:10:38.172 bs=4096 00:10:38.172 iodepth=1 00:10:38.172 norandommap=0 00:10:38.172 numjobs=1 00:10:38.172 00:10:38.172 verify_dump=1 00:10:38.172 verify_backlog=512 00:10:38.172 verify_state_save=0 00:10:38.172 do_verify=1 00:10:38.172 verify=crc32c-intel 00:10:38.172 [job0] 00:10:38.172 filename=/dev/nvme0n1 00:10:38.172 Could not set queue depth (nvme0n1) 00:10:38.172 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.172 fio-3.35 00:10:38.172 Starting 1 thread 00:10:39.110 00:10:39.110 job0: (groupid=0, jobs=1): err= 0: pid=77807: Sat Dec 7 22:41:53 2024 00:10:39.110 read: IOPS=2951, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec) 00:10:39.110 slat (nsec): min=11958, max=67039, avg=15225.87, stdev=5014.16 00:10:39.110 clat (usec): min=129, max=303, avg=181.88, stdev=24.38 00:10:39.110 lat (usec): min=144, max=319, avg=197.11, stdev=25.19 00:10:39.110 clat percentiles (usec): 00:10:39.110 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 161], 00:10:39.110 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 180], 60.00th=[ 186], 00:10:39.110 | 70.00th=[ 192], 80.00th=[ 202], 90.00th=[ 217], 95.00th=[ 227], 00:10:39.110 | 99.00th=[ 249], 99.50th=[ 255], 99.90th=[ 285], 99.95th=[ 297], 00:10:39.110 | 99.99th=[ 306] 00:10:39.110 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:39.110 slat (usec): min=15, max=148, avg=22.88, stdev= 7.10 00:10:39.110 clat (usec): min=76, max=628, avg=109.31, stdev=20.33 00:10:39.110 lat (usec): min=94, max=648, avg=132.19, stdev=22.10 00:10:39.110 clat percentiles (usec): 00:10:39.110 | 1.00th=[ 82], 5.00th=[ 87], 10.00th=[ 91], 20.00th=[ 95], 00:10:39.110 | 30.00th=[ 98], 40.00th=[ 101], 50.00th=[ 105], 60.00th=[ 111], 00:10:39.110 | 70.00th=[ 117], 80.00th=[ 123], 90.00th=[ 135], 95.00th=[ 143], 00:10:39.110 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 206], 99.95th=[ 297], 00:10:39.110 | 99.99th=[ 627] 00:10:39.110 bw ( KiB/s): min=12263, max=12263, per=99.90%, avg=12263.00, stdev= 0.00, samples=1 00:10:39.110 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:10:39.110 lat (usec) : 100=18.15%, 250=81.40%, 500=0.43%, 750=0.02% 00:10:39.110 cpu : usr=2.50%, sys=8.90%, ctx=6027, majf=0, minf=5 00:10:39.110 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:39.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.110 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:39.110 issued rwts: total=2954,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:39.110 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:39.110 00:10:39.110 Run status group 0 (all jobs): 00:10:39.110 READ: bw=11.5MiB/s (12.1MB/s), 11.5MiB/s-11.5MiB/s (12.1MB/s-12.1MB/s), io=11.5MiB (12.1MB), run=1001-1001msec 00:10:39.110 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:39.110 00:10:39.110 Disk stats (read/write): 00:10:39.110 nvme0n1: ios=2610/2851, merge=0/0, ticks=520/368, in_queue=888, 
util=91.18% 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:39.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:39.110 rmmod nvme_tcp 00:10:39.110 rmmod nvme_fabrics 00:10:39.110 rmmod nvme_keyring 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 77728 ']' 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 77728 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 77728 ']' 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 77728 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77728 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77728' 00:10:39.110 killing process with pid 77728 00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 77728 
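For reference, the fio-wrapper invocation above amounts to the following job file, reassembled from the parameter dump fio printed before the run (save as job0.fio and run fio job0.fio against the connected namespace):

    # job0.fio: 4 KiB sequential writes, QD1, 1 s time-based, crc32c-intel verify
    [global]
    thread=1
    invalidate=1
    rw=write
    time_based=1
    runtime=1
    ioengine=libaio
    direct=1
    bs=4096
    iodepth=1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel

    [job0]
    filename=/dev/nvme0n1

The run above sustained roughly 12 MiB/s at about 3k IOPS with sub-200 us completion latencies, a plausible range for a QD1 malloc-backed target over veth.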
00:10:39.110 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 77728 00:10:39.370 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:39.370 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:39.370 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:39.370 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:39.370 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:10:39.370 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:39.370 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:10:39.370 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:39.370 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:39.370 22:41:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:39.370 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:39.370 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:39.370 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:39.370 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:39.370 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:39.370 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:39.370 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:39.370 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:39.370 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:39.370 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:39.628 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:39.628 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:39.628 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:39.628 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.628 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.628 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.628 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:39.628 00:10:39.628 real 0m5.246s 00:10:39.628 user 0m15.388s 00:10:39.628 sys 0m2.269s 00:10:39.628 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.628 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.628 ************************************ 00:10:39.628 
END TEST nvmf_nmic 00:10:39.628 ************************************ 00:10:39.628 22:41:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:39.628 22:41:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:39.628 22:41:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.628 22:41:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:39.628 ************************************ 00:10:39.628 START TEST nvmf_fio_target 00:10:39.628 ************************************ 00:10:39.628 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:39.628 * Looking for test storage... 00:10:39.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:39.628 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:39.628 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:39.628 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:39.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.887 --rc genhtml_branch_coverage=1 00:10:39.887 --rc genhtml_function_coverage=1 00:10:39.887 --rc genhtml_legend=1 00:10:39.887 --rc geninfo_all_blocks=1 00:10:39.887 --rc geninfo_unexecuted_blocks=1 00:10:39.887 00:10:39.887 ' 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:39.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.887 --rc genhtml_branch_coverage=1 00:10:39.887 --rc genhtml_function_coverage=1 00:10:39.887 --rc genhtml_legend=1 00:10:39.887 --rc geninfo_all_blocks=1 00:10:39.887 --rc geninfo_unexecuted_blocks=1 00:10:39.887 00:10:39.887 ' 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:39.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.887 --rc genhtml_branch_coverage=1 00:10:39.887 --rc genhtml_function_coverage=1 00:10:39.887 --rc genhtml_legend=1 00:10:39.887 --rc geninfo_all_blocks=1 00:10:39.887 --rc geninfo_unexecuted_blocks=1 00:10:39.887 00:10:39.887 ' 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:39.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:39.887 --rc genhtml_branch_coverage=1 00:10:39.887 --rc genhtml_function_coverage=1 00:10:39.887 --rc genhtml_legend=1 00:10:39.887 --rc geninfo_all_blocks=1 00:10:39.887 --rc geninfo_unexecuted_blocks=1 00:10:39.887 00:10:39.887 ' 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:39.887 
22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.887 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:39.888 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:39.888 22:41:54 
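The "[: : integer expression expected" message logged above is a quoting bug rather than a test failure: build_nvmf_app_args reaches '[' '' -eq 1 ']' with an unset variable, and test(1) cannot compare an empty string numerically. The run tolerates it because the failed test simply evaluates false. A defensive sketch of the pattern that avoids it, with SOME_TEST_FLAG as a stand-in for whichever unset variable reaches common.sh line 33:

  # Hypothetical flag name; the point is the :-0 default, which keeps
  # [ ... -eq 1 ] from ever seeing an empty string
  if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
      echo 'flag enabled'
  fi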
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:39.888 Cannot find device "nvmf_init_br" 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:39.888 Cannot find device "nvmf_init_br2" 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:39.888 Cannot find device "nvmf_tgt_br" 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:39.888 Cannot find device "nvmf_tgt_br2" 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:39.888 Cannot find device "nvmf_init_br" 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:39.888 Cannot find device "nvmf_init_br2" 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:39.888 Cannot find device "nvmf_tgt_br" 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:39.888 Cannot find device "nvmf_tgt_br2" 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:39.888 Cannot find device "nvmf_br" 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:39.888 Cannot find device "nvmf_init_if" 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:39.888 Cannot find device "nvmf_init_if2" 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:39.888 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:39.888 
22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:39.888 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:39.888 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:40.147 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:40.147 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:10:40.147 00:10:40.147 --- 10.0.0.3 ping statistics --- 00:10:40.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.147 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:40.147 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:40.147 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:10:40.147 00:10:40.147 --- 10.0.0.4 ping statistics --- 00:10:40.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.147 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:40.147 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.147 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:40.147 00:10:40.147 --- 10.0.0.1 ping statistics --- 00:10:40.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.147 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:40.147 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
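Condensed, the nvmf_veth_init sequence traced above first tears down any leftovers (the "Cannot find device" lines are the expected no-ops of that idempotent cleanup), then builds the test topology: veth pairs for two initiator interfaces (10.0.0.1, 10.0.0.2) and two target interfaces moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), all joined by the nvmf_br bridge, with iptables ACCEPT rules for TCP port 4420, then verified by four pings. A one-pair-per-side sketch of the same setup, names and addresses as in this run:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                 # bridge the two peer ends
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                      # host-to-namespace sanity check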
00:10:40.147 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:10:40.147 00:10:40.147 --- 10.0.0.2 ping statistics --- 00:10:40.147 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.147 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # return 0 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.147 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=78041 00:10:40.148 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.148 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 78041 00:10:40.148 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 78041 ']' 00:10:40.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.148 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.148 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:40.148 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.148 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:40.148 22:41:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.406 [2024-12-07 22:41:54.944783] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
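The nvmfappstart step above reduces to launching nvmf_tgt inside the target namespace and polling its RPC socket until it answers; a simplified sketch of what waitforlisten does (binary and socket paths as in this run, the retry loop condensed from the real helper):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for _ in $(seq 1 100); do
      # rpc_get_methods succeeds only once the app listens on /var/tmp/spdk.sock
      "$rpc" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done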
00:10:40.406 [2024-12-07 22:41:54.945139] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.406 [2024-12-07 22:41:55.085524] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.406 [2024-12-07 22:41:55.121629] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.406 [2024-12-07 22:41:55.121904] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.406 [2024-12-07 22:41:55.121926] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.406 [2024-12-07 22:41:55.121935] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.406 [2024-12-07 22:41:55.121942] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.406 [2024-12-07 22:41:55.122013] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.406 [2024-12-07 22:41:55.122145] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.406 [2024-12-07 22:41:55.122687] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.406 [2024-12-07 22:41:55.122702] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.406 [2024-12-07 22:41:55.152478] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:40.664 22:41:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:40.664 22:41:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:40.664 22:41:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:40.664 22:41:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:40.664 22:41:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.664 22:41:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.664 22:41:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:40.921 [2024-12-07 22:41:55.530031] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.921 22:41:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:41.178 22:41:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:41.179 22:41:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:41.744 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:41.744 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.002 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:42.002 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.260 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:42.260 22:41:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:42.518 22:41:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:42.776 22:41:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:42.776 22:41:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.034 22:41:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:43.034 22:41:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:43.292 22:41:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:43.292 22:41:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:43.550 22:41:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:43.808 22:41:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:43.808 22:41:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:44.065 22:41:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:44.065 22:41:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:44.323 22:41:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:44.580 [2024-12-07 22:41:59.270129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:44.581 22:41:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:44.838 22:41:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:45.096 22:41:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:45.353 22:41:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:45.353 22:41:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:45.353 22:41:59 
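Everything between fio.sh@19 and fio.sh@46 above is straight-line provisioning over the RPC socket: one TCP transport, seven 64 MiB malloc bdevs (Malloc0 and Malloc1 exported directly, Malloc2/Malloc3 striped into raid0, Malloc4 through Malloc6 into concat0), one subsystem carrying all four bdevs as namespaces plus a listener on 10.0.0.3:4420, and finally the kernel-initiator connect. The same sequence condensed, with paths and NQNs exactly as in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  for _ in 0 1 2 3 4 5 6; do "$rpc" bdev_malloc_create 64 512; done   # auto-named Malloc0..Malloc6
  "$rpc" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  "$rpc" bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 \
      --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3
  # waitforserial then confirms all four namespaces surfaced as block devices:
  # lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   -> expects 4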
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:45.353 22:41:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:45.353 22:41:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:45.353 22:41:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:47.253 22:42:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:47.253 22:42:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:47.253 22:42:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:47.253 22:42:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:47.253 22:42:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:47.253 22:42:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:47.253 22:42:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:47.253 [global] 00:10:47.253 thread=1 00:10:47.253 invalidate=1 00:10:47.253 rw=write 00:10:47.253 time_based=1 00:10:47.253 runtime=1 00:10:47.253 ioengine=libaio 00:10:47.253 direct=1 00:10:47.253 bs=4096 00:10:47.253 iodepth=1 00:10:47.253 norandommap=0 00:10:47.253 numjobs=1 00:10:47.253 00:10:47.253 verify_dump=1 00:10:47.253 verify_backlog=512 00:10:47.253 verify_state_save=0 00:10:47.253 do_verify=1 00:10:47.253 verify=crc32c-intel 00:10:47.253 [job0] 00:10:47.253 filename=/dev/nvme0n1 00:10:47.253 [job1] 00:10:47.253 filename=/dev/nvme0n2 00:10:47.253 [job2] 00:10:47.253 filename=/dev/nvme0n3 00:10:47.253 [job3] 00:10:47.253 filename=/dev/nvme0n4 00:10:47.511 Could not set queue depth (nvme0n1) 00:10:47.511 Could not set queue depth (nvme0n2) 00:10:47.511 Could not set queue depth (nvme0n3) 00:10:47.511 Could not set queue depth (nvme0n4) 00:10:47.511 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:47.511 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:47.511 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:47.511 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:47.511 fio-3.35 00:10:47.511 Starting 4 threads 00:10:48.885 00:10:48.885 job0: (groupid=0, jobs=1): err= 0: pid=78219: Sat Dec 7 22:42:03 2024 00:10:48.885 read: IOPS=1754, BW=7017KiB/s (7185kB/s)(7024KiB/1001msec) 00:10:48.885 slat (nsec): min=14651, max=70973, avg=20798.85, stdev=7277.10 00:10:48.885 clat (usec): min=171, max=1075, avg=299.87, stdev=93.09 00:10:48.885 lat (usec): min=188, max=1126, avg=320.67, stdev=97.33 00:10:48.885 clat percentiles (usec): 00:10:48.885 | 1.00th=[ 182], 5.00th=[ 198], 10.00th=[ 233], 20.00th=[ 245], 00:10:48.885 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 273], 00:10:48.885 | 70.00th=[ 281], 80.00th=[ 371], 90.00th=[ 465], 95.00th=[ 486], 00:10:48.885 | 99.00th=[ 523], 99.50th=[ 562], 99.90th=[ 963], 99.95th=[ 1074], 00:10:48.885 | 99.99th=[ 1074] 
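The generated job file shown above drives four one-second libaio jobs, one per namespace-backed kernel device (/dev/nvme0n1 through /dev/nvme0n4, i.e. Malloc0, Malloc1, raid0, concat0), issuing 4 KiB sequential writes at queue depth 1 with crc32c-intel verification; the "Could not set queue depth" warnings are non-fatal here, since all four jobs start and complete. A single-device equivalent as a direct fio invocation, option names exactly as in the job file:

  fio --name=job0 --filename=/dev/nvme0n1 --thread=1 --ioengine=libaio \
      --direct=1 --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
      --time_based=1 --runtime=1 --invalidate=1 --norandommap=0 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 \
      --verify_backlog=512 --verify_state_save=0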
00:10:48.885 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:48.885 slat (nsec): min=18405, max=97432, avg=26716.81, stdev=7122.91 00:10:48.885 clat (usec): min=99, max=830, avg=182.10, stdev=51.25 00:10:48.885 lat (usec): min=124, max=853, avg=208.82, stdev=54.75 00:10:48.885 clat percentiles (usec): 00:10:48.885 | 1.00th=[ 106], 5.00th=[ 118], 10.00th=[ 124], 20.00th=[ 133], 00:10:48.885 | 30.00th=[ 149], 40.00th=[ 165], 50.00th=[ 180], 60.00th=[ 190], 00:10:48.885 | 70.00th=[ 200], 80.00th=[ 215], 90.00th=[ 260], 95.00th=[ 281], 00:10:48.885 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 347], 99.95th=[ 351], 00:10:48.885 | 99.99th=[ 832] 00:10:48.885 bw ( KiB/s): min= 8192, max= 8192, per=20.86%, avg=8192.00, stdev= 0.00, samples=1 00:10:48.885 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:48.885 lat (usec) : 100=0.03%, 250=59.62%, 500=39.09%, 750=1.16%, 1000=0.08% 00:10:48.885 lat (msec) : 2=0.03% 00:10:48.885 cpu : usr=2.10%, sys=7.10%, ctx=3804, majf=0, minf=9 00:10:48.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.886 issued rwts: total=1756,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.886 job1: (groupid=0, jobs=1): err= 0: pid=78220: Sat Dec 7 22:42:03 2024 00:10:48.886 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:48.886 slat (nsec): min=14222, max=71078, avg=21071.43, stdev=6409.30 00:10:48.886 clat (usec): min=217, max=1015, avg=334.64, stdev=77.95 00:10:48.886 lat (usec): min=232, max=1052, avg=355.71, stdev=81.20 00:10:48.886 clat percentiles (usec): 00:10:48.886 | 1.00th=[ 239], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 265], 00:10:48.886 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 310], 60.00th=[ 355], 00:10:48.886 | 70.00th=[ 375], 80.00th=[ 404], 90.00th=[ 457], 95.00th=[ 478], 00:10:48.886 | 99.00th=[ 506], 99.50th=[ 515], 99.90th=[ 523], 99.95th=[ 1012], 00:10:48.886 | 99.99th=[ 1012] 00:10:48.886 write: IOPS=1968, BW=7872KiB/s (8061kB/s)(7880KiB/1001msec); 0 zone resets 00:10:48.886 slat (usec): min=21, max=315, avg=31.41, stdev=12.34 00:10:48.886 clat (usec): min=101, max=841, avg=194.29, stdev=71.64 00:10:48.886 lat (usec): min=128, max=863, avg=225.70, stdev=79.21 00:10:48.886 clat percentiles (usec): 00:10:48.886 | 1.00th=[ 110], 5.00th=[ 118], 10.00th=[ 123], 20.00th=[ 130], 00:10:48.886 | 30.00th=[ 139], 40.00th=[ 159], 50.00th=[ 178], 60.00th=[ 202], 00:10:48.886 | 70.00th=[ 229], 80.00th=[ 253], 90.00th=[ 281], 95.00th=[ 314], 00:10:48.886 | 99.00th=[ 437], 99.50th=[ 449], 99.90th=[ 498], 99.95th=[ 840], 00:10:48.886 | 99.99th=[ 840] 00:10:48.886 bw ( KiB/s): min= 8192, max= 8192, per=20.86%, avg=8192.00, stdev= 0.00, samples=1 00:10:48.886 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:48.886 lat (usec) : 250=47.26%, 500=52.08%, 750=0.60%, 1000=0.03% 00:10:48.886 lat (msec) : 2=0.03% 00:10:48.886 cpu : usr=2.60%, sys=6.80%, ctx=3523, majf=0, minf=11 00:10:48.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.886 issued rwts: total=1536,1970,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:48.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.886 job2: (groupid=0, jobs=1): err= 0: pid=78222: Sat Dec 7 22:42:03 2024 00:10:48.886 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:48.886 slat (nsec): min=12616, max=33049, avg=14552.31, stdev=2036.14 00:10:48.886 clat (usec): min=156, max=446, avg=183.47, stdev=12.94 00:10:48.886 lat (usec): min=171, max=460, avg=198.02, stdev=13.40 00:10:48.886 clat percentiles (usec): 00:10:48.886 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:10:48.886 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186], 00:10:48.886 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 198], 95.00th=[ 204], 00:10:48.886 | 99.00th=[ 217], 99.50th=[ 223], 99.90th=[ 265], 99.95th=[ 265], 00:10:48.886 | 99.99th=[ 445] 00:10:48.886 write: IOPS=2993, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec); 0 zone resets 00:10:48.886 slat (nsec): min=15942, max=88564, avg=22119.75, stdev=3578.67 00:10:48.886 clat (usec): min=109, max=619, avg=139.21, stdev=15.98 00:10:48.886 lat (usec): min=129, max=642, avg=161.33, stdev=16.75 00:10:48.886 clat percentiles (usec): 00:10:48.886 | 1.00th=[ 115], 5.00th=[ 121], 10.00th=[ 125], 20.00th=[ 129], 00:10:48.886 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:10:48.886 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 159], 00:10:48.886 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 273], 99.95th=[ 400], 00:10:48.886 | 99.99th=[ 619] 00:10:48.886 bw ( KiB/s): min=12288, max=12288, per=31.29%, avg=12288.00, stdev= 0.00, samples=1 00:10:48.886 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:48.886 lat (usec) : 250=99.84%, 500=0.14%, 750=0.02% 00:10:48.886 cpu : usr=2.50%, sys=7.80%, ctx=5556, majf=0, minf=11 00:10:48.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.886 issued rwts: total=2560,2996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.886 job3: (groupid=0, jobs=1): err= 0: pid=78226: Sat Dec 7 22:42:03 2024 00:10:48.886 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:48.886 slat (nsec): min=12545, max=92040, avg=17561.49, stdev=5412.10 00:10:48.886 clat (usec): min=153, max=1586, avg=186.11, stdev=30.77 00:10:48.886 lat (usec): min=167, max=1600, avg=203.68, stdev=31.56 00:10:48.886 clat percentiles (usec): 00:10:48.886 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:10:48.886 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 188], 00:10:48.886 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 210], 00:10:48.886 | 99.00th=[ 225], 99.50th=[ 229], 99.90th=[ 249], 99.95th=[ 262], 00:10:48.886 | 99.99th=[ 1582] 00:10:48.886 write: IOPS=2811, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec); 0 zone resets 00:10:48.886 slat (nsec): min=19350, max=87252, avg=26598.35, stdev=7992.87 00:10:48.886 clat (usec): min=109, max=1842, avg=139.34, stdev=43.93 00:10:48.886 lat (usec): min=129, max=1878, avg=165.94, stdev=44.84 00:10:48.886 clat percentiles (usec): 00:10:48.886 | 1.00th=[ 119], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 129], 00:10:48.886 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:10:48.886 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 159], 00:10:48.886 | 
99.00th=[ 169], 99.50th=[ 178], 99.90th=[ 515], 99.95th=[ 1532], 00:10:48.886 | 99.99th=[ 1844] 00:10:48.886 bw ( KiB/s): min=12288, max=12288, per=31.29%, avg=12288.00, stdev= 0.00, samples=1 00:10:48.886 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:48.886 lat (usec) : 250=99.87%, 500=0.06%, 750=0.02% 00:10:48.886 lat (msec) : 2=0.06% 00:10:48.886 cpu : usr=2.30%, sys=9.80%, ctx=5375, majf=0, minf=6 00:10:48.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.886 issued rwts: total=2560,2814,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.886 00:10:48.886 Run status group 0 (all jobs): 00:10:48.886 READ: bw=32.8MiB/s (34.4MB/s), 6138KiB/s-9.99MiB/s (6285kB/s-10.5MB/s), io=32.9MiB (34.5MB), run=1001-1001msec 00:10:48.886 WRITE: bw=38.4MiB/s (40.2MB/s), 7872KiB/s-11.7MiB/s (8061kB/s-12.3MB/s), io=38.4MiB (40.3MB), run=1001-1001msec 00:10:48.886 00:10:48.886 Disk stats (read/write): 00:10:48.886 nvme0n1: ios=1586/1707, merge=0/0, ticks=459/331, in_queue=790, util=86.47% 00:10:48.886 nvme0n2: ios=1427/1536, merge=0/0, ticks=510/333, in_queue=843, util=88.62% 00:10:48.886 nvme0n3: ios=2148/2560, merge=0/0, ticks=403/381, in_queue=784, util=88.82% 00:10:48.886 nvme0n4: ios=2048/2539, merge=0/0, ticks=388/382, in_queue=770, util=89.57% 00:10:48.886 22:42:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:48.886 [global] 00:10:48.886 thread=1 00:10:48.886 invalidate=1 00:10:48.886 rw=randwrite 00:10:48.886 time_based=1 00:10:48.886 runtime=1 00:10:48.886 ioengine=libaio 00:10:48.886 direct=1 00:10:48.886 bs=4096 00:10:48.886 iodepth=1 00:10:48.886 norandommap=0 00:10:48.886 numjobs=1 00:10:48.886 00:10:48.886 verify_dump=1 00:10:48.886 verify_backlog=512 00:10:48.886 verify_state_save=0 00:10:48.886 do_verify=1 00:10:48.886 verify=crc32c-intel 00:10:48.886 [job0] 00:10:48.886 filename=/dev/nvme0n1 00:10:48.886 [job1] 00:10:48.886 filename=/dev/nvme0n2 00:10:48.886 [job2] 00:10:48.886 filename=/dev/nvme0n3 00:10:48.886 [job3] 00:10:48.886 filename=/dev/nvme0n4 00:10:48.886 Could not set queue depth (nvme0n1) 00:10:48.886 Could not set queue depth (nvme0n2) 00:10:48.886 Could not set queue depth (nvme0n3) 00:10:48.886 Could not set queue depth (nvme0n4) 00:10:48.886 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.886 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.886 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.886 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.886 fio-3.35 00:10:48.886 Starting 4 threads 00:10:50.262 00:10:50.262 job0: (groupid=0, jobs=1): err= 0: pid=78286: Sat Dec 7 22:42:04 2024 00:10:50.262 read: IOPS=2845, BW=11.1MiB/s (11.7MB/s)(11.1MiB/1001msec) 00:10:50.262 slat (nsec): min=11417, max=41380, avg=14011.15, stdev=3079.54 00:10:50.262 clat (usec): min=137, max=664, avg=170.23, stdev=15.68 00:10:50.262 lat (usec): min=149, max=678, avg=184.24, stdev=16.07 00:10:50.262 clat percentiles (usec): 
00:10:50.262 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 159], 00:10:50.262 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:10:50.262 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 192], 00:10:50.262 | 99.00th=[ 206], 99.50th=[ 212], 99.90th=[ 225], 99.95th=[ 229], 00:10:50.262 | 99.99th=[ 668] 00:10:50.262 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:50.262 slat (nsec): min=14271, max=65995, avg=20858.64, stdev=3472.41 00:10:50.262 clat (usec): min=99, max=609, avg=130.40, stdev=17.50 00:10:50.262 lat (usec): min=119, max=638, avg=151.26, stdev=17.95 00:10:50.262 clat percentiles (usec): 00:10:50.262 | 1.00th=[ 108], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 122], 00:10:50.262 | 30.00th=[ 125], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 133], 00:10:50.262 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 149], 00:10:50.262 | 99.00th=[ 163], 99.50th=[ 172], 99.90th=[ 273], 99.95th=[ 529], 00:10:50.262 | 99.99th=[ 611] 00:10:50.262 bw ( KiB/s): min=12288, max=12288, per=25.42%, avg=12288.00, stdev= 0.00, samples=1 00:10:50.262 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:50.263 lat (usec) : 100=0.02%, 250=99.88%, 500=0.05%, 750=0.05% 00:10:50.263 cpu : usr=2.40%, sys=8.20%, ctx=5920, majf=0, minf=13 00:10:50.263 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.263 issued rwts: total=2848,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.263 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.263 job1: (groupid=0, jobs=1): err= 0: pid=78287: Sat Dec 7 22:42:04 2024 00:10:50.263 read: IOPS=2811, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec) 00:10:50.263 slat (nsec): min=10880, max=38902, avg=13288.40, stdev=2794.52 00:10:50.263 clat (usec): min=142, max=293, avg=171.83, stdev=13.66 00:10:50.263 lat (usec): min=155, max=304, avg=185.11, stdev=14.10 00:10:50.263 clat percentiles (usec): 00:10:50.263 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:10:50.263 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:10:50.263 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 196], 00:10:50.263 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 227], 99.95th=[ 237], 00:10:50.263 | 99.99th=[ 293] 00:10:50.263 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:50.263 slat (nsec): min=14113, max=85117, avg=20228.23, stdev=3610.71 00:10:50.263 clat (usec): min=95, max=328, avg=132.41, stdev=12.57 00:10:50.263 lat (usec): min=113, max=346, avg=152.64, stdev=13.11 00:10:50.263 clat percentiles (usec): 00:10:50.263 | 1.00th=[ 109], 5.00th=[ 116], 10.00th=[ 119], 20.00th=[ 123], 00:10:50.263 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 135], 00:10:50.263 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153], 00:10:50.263 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 190], 99.95th=[ 237], 00:10:50.263 | 99.99th=[ 330] 00:10:50.263 bw ( KiB/s): min=12288, max=12288, per=25.42%, avg=12288.00, stdev= 0.00, samples=1 00:10:50.263 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:50.263 lat (usec) : 100=0.03%, 250=99.93%, 500=0.03% 00:10:50.263 cpu : usr=2.40%, sys=7.90%, ctx=5886, majf=0, minf=9 00:10:50.263 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:10:50.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.263 issued rwts: total=2814,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.263 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.263 job2: (groupid=0, jobs=1): err= 0: pid=78288: Sat Dec 7 22:42:04 2024 00:10:50.263 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:50.263 slat (nsec): min=11759, max=41767, avg=13851.28, stdev=2910.92 00:10:50.263 clat (usec): min=150, max=1540, avg=185.07, stdev=30.39 00:10:50.263 lat (usec): min=163, max=1554, avg=198.93, stdev=30.46 00:10:50.263 clat percentiles (usec): 00:10:50.263 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 174], 00:10:50.263 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:10:50.263 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 212], 00:10:50.263 | 99.00th=[ 227], 99.50th=[ 237], 99.90th=[ 249], 99.95th=[ 255], 00:10:50.263 | 99.99th=[ 1549] 00:10:50.263 write: IOPS=2966, BW=11.6MiB/s (12.1MB/s)(11.6MiB/1001msec); 0 zone resets 00:10:50.263 slat (usec): min=18, max=100, avg=21.21, stdev= 4.58 00:10:50.263 clat (usec): min=109, max=2277, avg=140.79, stdev=42.82 00:10:50.263 lat (usec): min=130, max=2296, avg=162.00, stdev=43.00 00:10:50.263 clat percentiles (usec): 00:10:50.263 | 1.00th=[ 121], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 131], 00:10:50.263 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:10:50.263 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 161], 00:10:50.263 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 562], 99.95th=[ 676], 00:10:50.263 | 99.99th=[ 2278] 00:10:50.263 bw ( KiB/s): min=12288, max=12288, per=25.42%, avg=12288.00, stdev= 0.00, samples=1 00:10:50.263 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:50.263 lat (usec) : 250=99.89%, 500=0.04%, 750=0.04% 00:10:50.263 lat (msec) : 2=0.02%, 4=0.02% 00:10:50.263 cpu : usr=2.10%, sys=8.10%, ctx=5530, majf=0, minf=11 00:10:50.263 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.263 issued rwts: total=2560,2969,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.263 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.263 job3: (groupid=0, jobs=1): err= 0: pid=78289: Sat Dec 7 22:42:04 2024 00:10:50.263 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:50.263 slat (nsec): min=12114, max=42696, avg=14848.74, stdev=3085.29 00:10:50.263 clat (usec): min=150, max=1862, avg=183.77, stdev=38.45 00:10:50.263 lat (usec): min=163, max=1879, avg=198.62, stdev=38.69 00:10:50.263 clat percentiles (usec): 00:10:50.263 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 172], 00:10:50.263 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:10:50.263 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 202], 95.00th=[ 208], 00:10:50.263 | 99.00th=[ 225], 99.50th=[ 247], 99.90th=[ 453], 99.95th=[ 619], 00:10:50.263 | 99.99th=[ 1860] 00:10:50.263 write: IOPS=2980, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1001msec); 0 zone resets 00:10:50.263 slat (nsec): min=15410, max=76573, avg=22303.31, stdev=4677.06 00:10:50.263 clat (usec): min=104, max=440, avg=138.88, stdev=16.14 00:10:50.263 lat (usec): min=124, max=460, avg=161.18, stdev=17.03 
00:10:50.263 clat percentiles (usec): 00:10:50.263 | 1.00th=[ 118], 5.00th=[ 124], 10.00th=[ 126], 20.00th=[ 129], 00:10:50.263 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:10:50.263 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 153], 95.00th=[ 161], 00:10:50.263 | 99.00th=[ 180], 99.50th=[ 215], 99.90th=[ 330], 99.95th=[ 359], 00:10:50.263 | 99.99th=[ 441] 00:10:50.263 bw ( KiB/s): min=12288, max=12288, per=25.42%, avg=12288.00, stdev= 0.00, samples=1 00:10:50.263 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:50.263 lat (usec) : 250=99.60%, 500=0.36%, 750=0.02% 00:10:50.263 lat (msec) : 2=0.02% 00:10:50.263 cpu : usr=2.30%, sys=8.30%, ctx=5545, majf=0, minf=11 00:10:50.263 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:50.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.263 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.263 issued rwts: total=2560,2983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.263 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:50.263 00:10:50.263 Run status group 0 (all jobs): 00:10:50.263 READ: bw=42.1MiB/s (44.1MB/s), 9.99MiB/s-11.1MiB/s (10.5MB/s-11.7MB/s), io=42.1MiB (44.2MB), run=1001-1001msec 00:10:50.263 WRITE: bw=47.2MiB/s (49.5MB/s), 11.6MiB/s-12.0MiB/s (12.1MB/s-12.6MB/s), io=47.2MiB (49.5MB), run=1001-1001msec 00:10:50.263 00:10:50.263 Disk stats (read/write): 00:10:50.263 nvme0n1: ios=2562/2560, merge=0/0, ticks=471/358, in_queue=829, util=87.68% 00:10:50.263 nvme0n2: ios=2515/2560, merge=0/0, ticks=494/368, in_queue=862, util=88.71% 00:10:50.263 nvme0n3: ios=2197/2560, merge=0/0, ticks=406/390, in_queue=796, util=89.23% 00:10:50.263 nvme0n4: ios=2200/2560, merge=0/0, ticks=414/379, in_queue=793, util=89.79% 00:10:50.263 22:42:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:50.263 [global] 00:10:50.263 thread=1 00:10:50.263 invalidate=1 00:10:50.263 rw=write 00:10:50.263 time_based=1 00:10:50.263 runtime=1 00:10:50.263 ioengine=libaio 00:10:50.263 direct=1 00:10:50.263 bs=4096 00:10:50.263 iodepth=128 00:10:50.263 norandommap=0 00:10:50.263 numjobs=1 00:10:50.263 00:10:50.263 verify_dump=1 00:10:50.263 verify_backlog=512 00:10:50.263 verify_state_save=0 00:10:50.263 do_verify=1 00:10:50.263 verify=crc32c-intel 00:10:50.263 [job0] 00:10:50.263 filename=/dev/nvme0n1 00:10:50.263 [job1] 00:10:50.263 filename=/dev/nvme0n2 00:10:50.263 [job2] 00:10:50.263 filename=/dev/nvme0n3 00:10:50.263 [job3] 00:10:50.263 filename=/dev/nvme0n4 00:10:50.263 Could not set queue depth (nvme0n1) 00:10:50.263 Could not set queue depth (nvme0n2) 00:10:50.263 Could not set queue depth (nvme0n3) 00:10:50.263 Could not set queue depth (nvme0n4) 00:10:50.263 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:50.263 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:50.263 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:50.263 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:50.263 fio-3.35 00:10:50.263 Starting 4 threads 00:10:51.649 00:10:51.649 job0: (groupid=0, jobs=1): err= 0: pid=78344: Sat Dec 7 22:42:06 2024 00:10:51.649 read: IOPS=4598, BW=18.0MiB/s 
(18.8MB/s)(18.0MiB/1002msec) 00:10:51.649 slat (usec): min=5, max=3211, avg=100.74, stdev=475.91 00:10:51.649 clat (usec): min=9806, max=14932, avg=13537.50, stdev=623.62 00:10:51.649 lat (usec): min=11464, max=14946, avg=13638.24, stdev=412.23 00:10:51.649 clat percentiles (usec): 00:10:51.649 | 1.00th=[10814], 5.00th=[12780], 10.00th=[13042], 20.00th=[13173], 00:10:51.649 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13566], 60.00th=[13698], 00:10:51.649 | 70.00th=[13829], 80.00th=[13960], 90.00th=[14091], 95.00th=[14222], 00:10:51.649 | 99.00th=[14746], 99.50th=[14877], 99.90th=[14877], 99.95th=[14877], 00:10:51.649 | 99.99th=[14877] 00:10:51.649 write: IOPS=5014, BW=19.6MiB/s (20.5MB/s)(19.6MiB/1002msec); 0 zone resets 00:10:51.649 slat (usec): min=11, max=4220, avg=98.50, stdev=415.58 00:10:51.649 clat (usec): min=205, max=14892, avg=12777.50, stdev=1191.98 00:10:51.649 lat (usec): min=2454, max=14912, avg=12876.01, stdev=1117.48 00:10:51.649 clat percentiles (usec): 00:10:51.649 | 1.00th=[ 6259], 5.00th=[11600], 10.00th=[12125], 20.00th=[12518], 00:10:51.649 | 30.00th=[12780], 40.00th=[12911], 50.00th=[12911], 60.00th=[13042], 00:10:51.649 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13698], 00:10:51.649 | 99.00th=[14746], 99.50th=[14877], 99.90th=[14877], 99.95th=[14877], 00:10:51.649 | 99.99th=[14877] 00:10:51.649 bw ( KiB/s): min=18696, max=20521, per=26.06%, avg=19608.50, stdev=1290.47, samples=2 00:10:51.649 iops : min= 4674, max= 5130, avg=4902.00, stdev=322.44, samples=2 00:10:51.649 lat (usec) : 250=0.01% 00:10:51.649 lat (msec) : 4=0.33%, 10=0.81%, 20=98.85% 00:10:51.649 cpu : usr=3.90%, sys=15.38%, ctx=303, majf=0, minf=1 00:10:51.649 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:51.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:51.649 issued rwts: total=4608,5025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.649 job1: (groupid=0, jobs=1): err= 0: pid=78345: Sat Dec 7 22:42:06 2024 00:10:51.649 read: IOPS=4694, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1002msec) 00:10:51.649 slat (usec): min=4, max=6124, avg=102.76, stdev=497.56 00:10:51.649 clat (usec): min=773, max=19064, avg=13164.05, stdev=1644.44 00:10:51.649 lat (usec): min=2203, max=22863, avg=13266.81, stdev=1670.58 00:10:51.649 clat percentiles (usec): 00:10:51.649 | 1.00th=[ 6783], 5.00th=[10683], 10.00th=[11731], 20.00th=[12649], 00:10:51.649 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:10:51.649 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[15795], 00:10:51.649 | 99.00th=[17433], 99.50th=[18220], 99.90th=[18744], 99.95th=[18744], 00:10:51.649 | 99.99th=[19006] 00:10:51.649 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:51.649 slat (usec): min=12, max=5376, avg=93.24, stdev=490.80 00:10:51.649 clat (usec): min=5383, max=19149, avg=12654.35, stdev=1382.23 00:10:51.649 lat (usec): min=5409, max=19432, avg=12747.59, stdev=1452.80 00:10:51.649 clat percentiles (usec): 00:10:51.649 | 1.00th=[ 8848], 5.00th=[10683], 10.00th=[11469], 20.00th=[11863], 00:10:51.649 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:10:51.649 | 70.00th=[12911], 80.00th=[13566], 90.00th=[13960], 95.00th=[15008], 00:10:51.649 | 99.00th=[17433], 99.50th=[18220], 99.90th=[19268], 99.95th=[19268], 00:10:51.649 | 
99.99th=[19268] 00:10:51.649 bw ( KiB/s): min=20232, max=20480, per=27.05%, avg=20356.00, stdev=175.36, samples=2 00:10:51.649 iops : min= 5058, max= 5120, avg=5089.00, stdev=43.84, samples=2 00:10:51.649 lat (usec) : 1000=0.01% 00:10:51.649 lat (msec) : 4=0.17%, 10=2.70%, 20=97.12% 00:10:51.649 cpu : usr=4.00%, sys=14.39%, ctx=381, majf=0, minf=1 00:10:51.649 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:51.649 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.649 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:51.649 issued rwts: total=4704,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.649 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.650 job2: (groupid=0, jobs=1): err= 0: pid=78346: Sat Dec 7 22:42:06 2024 00:10:51.650 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:10:51.650 slat (usec): min=6, max=3935, avg=115.24, stdev=549.54 00:10:51.650 clat (usec): min=11278, max=17547, avg=15417.72, stdev=754.24 00:10:51.650 lat (usec): min=14046, max=17573, avg=15532.96, stdev=528.34 00:10:51.650 clat percentiles (usec): 00:10:51.650 | 1.00th=[12125], 5.00th=[14484], 10.00th=[14746], 20.00th=[15008], 00:10:51.650 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15533], 60.00th=[15664], 00:10:51.650 | 70.00th=[15795], 80.00th=[15926], 90.00th=[16057], 95.00th=[16450], 00:10:51.650 | 99.00th=[16909], 99.50th=[17433], 99.90th=[17433], 99.95th=[17433], 00:10:51.650 | 99.99th=[17433] 00:10:51.650 write: IOPS=4299, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1003msec); 0 zone resets 00:10:51.650 slat (usec): min=14, max=3496, avg=113.08, stdev=483.19 00:10:51.650 clat (usec): min=2004, max=17127, avg=14683.61, stdev=1440.81 00:10:51.650 lat (usec): min=2026, max=17151, avg=14796.69, stdev=1360.82 00:10:51.650 clat percentiles (usec): 00:10:51.650 | 1.00th=[ 6325], 5.00th=[12780], 10.00th=[14222], 20.00th=[14353], 00:10:51.650 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14877], 60.00th=[15008], 00:10:51.650 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15533], 95.00th=[15664], 00:10:51.650 | 99.00th=[16450], 99.50th=[16909], 99.90th=[16909], 99.95th=[17171], 00:10:51.650 | 99.99th=[17171] 00:10:51.650 bw ( KiB/s): min=16416, max=17096, per=22.27%, avg=16756.00, stdev=480.83, samples=2 00:10:51.650 iops : min= 4104, max= 4274, avg=4189.00, stdev=120.21, samples=2 00:10:51.650 lat (msec) : 4=0.29%, 10=0.57%, 20=99.14% 00:10:51.650 cpu : usr=4.89%, sys=13.37%, ctx=264, majf=0, minf=8 00:10:51.650 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:51.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:51.650 issued rwts: total=4096,4312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.650 job3: (groupid=0, jobs=1): err= 0: pid=78347: Sat Dec 7 22:42:06 2024 00:10:51.650 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:10:51.650 slat (usec): min=5, max=4349, avg=113.97, stdev=545.78 00:10:51.650 clat (usec): min=10852, max=16738, avg=15188.02, stdev=762.42 00:10:51.650 lat (usec): min=13724, max=16754, avg=15301.98, stdev=542.80 00:10:51.650 clat percentiles (usec): 00:10:51.650 | 1.00th=[11994], 5.00th=[14222], 10.00th=[14484], 20.00th=[14746], 00:10:51.650 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15270], 60.00th=[15401], 00:10:51.650 | 70.00th=[15533], 
80.00th=[15795], 90.00th=[15926], 95.00th=[16188], 00:10:51.650 | 99.00th=[16450], 99.50th=[16581], 99.90th=[16712], 99.95th=[16712], 00:10:51.650 | 99.99th=[16712] 00:10:51.650 write: IOPS=4396, BW=17.2MiB/s (18.0MB/s)(17.2MiB/1003msec); 0 zone resets 00:10:51.650 slat (usec): min=11, max=3527, avg=112.11, stdev=488.16 00:10:51.650 clat (usec): min=2151, max=17248, avg=14572.52, stdev=1437.70 00:10:51.650 lat (usec): min=2173, max=17265, avg=14684.64, stdev=1356.89 00:10:51.650 clat percentiles (usec): 00:10:51.650 | 1.00th=[ 6194], 5.00th=[12780], 10.00th=[13960], 20.00th=[14353], 00:10:51.650 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14877], 60.00th=[15008], 00:10:51.650 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15401], 95.00th=[15533], 00:10:51.650 | 99.00th=[16188], 99.50th=[16319], 99.90th=[17171], 99.95th=[17171], 00:10:51.650 | 99.99th=[17171] 00:10:51.650 bw ( KiB/s): min=16424, max=17872, per=22.79%, avg=17148.00, stdev=1023.89, samples=2 00:10:51.650 iops : min= 4106, max= 4468, avg=4287.00, stdev=255.97, samples=2 00:10:51.650 lat (msec) : 4=0.31%, 10=0.72%, 20=98.98% 00:10:51.650 cpu : usr=4.49%, sys=13.07%, ctx=267, majf=0, minf=3 00:10:51.650 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:51.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:51.650 issued rwts: total=4096,4410,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.650 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.650 00:10:51.650 Run status group 0 (all jobs): 00:10:51.650 READ: bw=68.2MiB/s (71.5MB/s), 16.0MiB/s-18.3MiB/s (16.7MB/s-19.2MB/s), io=68.4MiB (71.7MB), run=1002-1003msec 00:10:51.650 WRITE: bw=73.5MiB/s (77.0MB/s), 16.8MiB/s-20.0MiB/s (17.6MB/s-20.9MB/s), io=73.7MiB (77.3MB), run=1002-1003msec 00:10:51.650 00:10:51.650 Disk stats (read/write): 00:10:51.650 nvme0n1: ios=4145/4160, merge=0/0, ticks=12364/11214, in_queue=23578, util=87.78% 00:10:51.650 nvme0n2: ios=4119/4351, merge=0/0, ticks=26259/22982, in_queue=49241, util=88.01% 00:10:51.650 nvme0n3: ios=3584/3616, merge=0/0, ticks=12812/11513, in_queue=24325, util=89.14% 00:10:51.650 nvme0n4: ios=3584/3680, merge=0/0, ticks=12141/11778, in_queue=23919, util=89.70% 00:10:51.650 22:42:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:51.650 [global] 00:10:51.650 thread=1 00:10:51.650 invalidate=1 00:10:51.650 rw=randwrite 00:10:51.650 time_based=1 00:10:51.650 runtime=1 00:10:51.650 ioengine=libaio 00:10:51.650 direct=1 00:10:51.650 bs=4096 00:10:51.650 iodepth=128 00:10:51.650 norandommap=0 00:10:51.650 numjobs=1 00:10:51.650 00:10:51.650 verify_dump=1 00:10:51.650 verify_backlog=512 00:10:51.650 verify_state_save=0 00:10:51.650 do_verify=1 00:10:51.650 verify=crc32c-intel 00:10:51.650 [job0] 00:10:51.650 filename=/dev/nvme0n1 00:10:51.650 [job1] 00:10:51.650 filename=/dev/nvme0n2 00:10:51.650 [job2] 00:10:51.650 filename=/dev/nvme0n3 00:10:51.650 [job3] 00:10:51.650 filename=/dev/nvme0n4 00:10:51.650 Could not set queue depth (nvme0n1) 00:10:51.650 Could not set queue depth (nvme0n2) 00:10:51.650 Could not set queue depth (nvme0n3) 00:10:51.650 Could not set queue depth (nvme0n4) 00:10:51.650 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.650 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.650 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.650 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.650 fio-3.35 00:10:51.650 Starting 4 threads 00:10:53.026 00:10:53.026 job0: (groupid=0, jobs=1): err= 0: pid=78400: Sat Dec 7 22:42:07 2024 00:10:53.026 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:10:53.026 slat (usec): min=3, max=8359, avg=212.29, stdev=826.72 00:10:53.026 clat (usec): min=10619, max=36507, avg=26769.53, stdev=3085.91 00:10:53.026 lat (usec): min=10631, max=36534, avg=26981.82, stdev=3110.08 00:10:53.026 clat percentiles (usec): 00:10:53.026 | 1.00th=[18744], 5.00th=[20579], 10.00th=[22938], 20.00th=[24773], 00:10:53.026 | 30.00th=[25560], 40.00th=[26346], 50.00th=[26870], 60.00th=[27395], 00:10:53.026 | 70.00th=[28181], 80.00th=[29230], 90.00th=[30802], 95.00th=[31589], 00:10:53.026 | 99.00th=[33162], 99.50th=[33424], 99.90th=[35390], 99.95th=[35390], 00:10:53.026 | 99.99th=[36439] 00:10:53.026 write: IOPS=2585, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1005msec); 0 zone resets 00:10:53.026 slat (usec): min=11, max=6398, avg=167.86, stdev=646.26 00:10:53.026 clat (usec): min=3297, max=31217, avg=22266.61, stdev=3434.32 00:10:53.026 lat (usec): min=7658, max=31411, avg=22434.48, stdev=3411.68 00:10:53.026 clat percentiles (usec): 00:10:53.026 | 1.00th=[10159], 5.00th=[16319], 10.00th=[17957], 20.00th=[19792], 00:10:53.026 | 30.00th=[20317], 40.00th=[21627], 50.00th=[22938], 60.00th=[23725], 00:10:53.026 | 70.00th=[24249], 80.00th=[24773], 90.00th=[25822], 95.00th=[27657], 00:10:53.026 | 99.00th=[29230], 99.50th=[30016], 99.90th=[30278], 99.95th=[30540], 00:10:53.026 | 99.99th=[31327] 00:10:53.026 bw ( KiB/s): min=11944, max=11944, per=18.90%, avg=11944.00, stdev= 0.00, samples=1 00:10:53.026 iops : min= 2986, max= 2986, avg=2986.00, stdev= 0.00, samples=1 00:10:53.026 lat (msec) : 4=0.02%, 10=0.37%, 20=12.95%, 50=86.66% 00:10:53.026 cpu : usr=1.99%, sys=8.17%, ctx=803, majf=0, minf=19 00:10:53.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:53.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.026 issued rwts: total=2560,2598,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.026 job1: (groupid=0, jobs=1): err= 0: pid=78402: Sat Dec 7 22:42:07 2024 00:10:53.026 read: IOPS=5436, BW=21.2MiB/s (22.3MB/s)(21.3MiB/1001msec) 00:10:53.026 slat (usec): min=7, max=3108, avg=87.56, stdev=408.78 00:10:53.026 clat (usec): min=381, max=12916, avg=11671.24, stdev=1039.31 00:10:53.026 lat (usec): min=413, max=12938, avg=11758.80, stdev=957.78 00:10:53.026 clat percentiles (usec): 00:10:53.026 | 1.00th=[ 5669], 5.00th=[10945], 10.00th=[11338], 20.00th=[11469], 00:10:53.026 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:10:53.026 | 70.00th=[12125], 80.00th=[12125], 90.00th=[12256], 95.00th=[12387], 00:10:53.026 | 99.00th=[12518], 99.50th=[12649], 99.90th=[12911], 99.95th=[12911], 00:10:53.026 | 99.99th=[12911] 00:10:53.026 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:10:53.026 slat (usec): min=11, max=2721, avg=85.00, stdev=352.74 00:10:53.026 clat (usec): min=8472, max=12429, avg=11176.58, 
stdev=450.47 00:10:53.026 lat (usec): min=9528, max=12455, avg=11261.58, stdev=281.82 00:10:53.026 clat percentiles (usec): 00:10:53.026 | 1.00th=[ 9110], 5.00th=[10683], 10.00th=[10814], 20.00th=[10945], 00:10:53.026 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11207], 60.00th=[11338], 00:10:53.026 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11600], 95.00th=[11600], 00:10:53.026 | 99.00th=[11994], 99.50th=[12256], 99.90th=[12387], 99.95th=[12387], 00:10:53.026 | 99.99th=[12387] 00:10:53.026 bw ( KiB/s): min=22792, max=22792, per=36.06%, avg=22792.00, stdev= 0.00, samples=1 00:10:53.026 iops : min= 5698, max= 5698, avg=5698.00, stdev= 0.00, samples=1 00:10:53.026 lat (usec) : 500=0.02% 00:10:53.026 lat (msec) : 4=0.29%, 10=3.52%, 20=96.17% 00:10:53.026 cpu : usr=6.00%, sys=14.20%, ctx=348, majf=0, minf=5 00:10:53.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:53.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.026 issued rwts: total=5442,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.026 job2: (groupid=0, jobs=1): err= 0: pid=78407: Sat Dec 7 22:42:07 2024 00:10:53.026 read: IOPS=4463, BW=17.4MiB/s (18.3MB/s)(17.5MiB/1004msec) 00:10:53.026 slat (usec): min=7, max=5983, avg=104.90, stdev=503.78 00:10:53.027 clat (usec): min=238, max=19450, avg=13495.11, stdev=1391.42 00:10:53.027 lat (usec): min=2903, max=22382, avg=13600.02, stdev=1314.05 00:10:53.027 clat percentiles (usec): 00:10:53.027 | 1.00th=[ 6390], 5.00th=[11600], 10.00th=[12911], 20.00th=[13304], 00:10:53.027 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13698], 60.00th=[13829], 00:10:53.027 | 70.00th=[13829], 80.00th=[13960], 90.00th=[14222], 95.00th=[14353], 00:10:53.027 | 99.00th=[16450], 99.50th=[19268], 99.90th=[19530], 99.95th=[19530], 00:10:53.027 | 99.99th=[19530] 00:10:53.027 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:10:53.027 slat (usec): min=11, max=6553, avg=107.06, stdev=488.99 00:10:53.027 clat (usec): min=10102, max=26201, avg=14253.29, stdev=3642.45 00:10:53.027 lat (usec): min=11906, max=26220, avg=14360.35, stdev=3635.06 00:10:53.027 clat percentiles (usec): 00:10:53.027 | 1.00th=[10552], 5.00th=[12387], 10.00th=[12518], 20.00th=[12649], 00:10:53.027 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:10:53.027 | 70.00th=[13304], 80.00th=[13566], 90.00th=[22938], 95.00th=[24511], 00:10:53.027 | 99.00th=[25822], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:10:53.027 | 99.99th=[26084] 00:10:53.027 bw ( KiB/s): min=16904, max=19960, per=29.17%, avg=18432.00, stdev=2160.92, samples=2 00:10:53.027 iops : min= 4226, max= 4990, avg=4608.00, stdev=540.23, samples=2 00:10:53.027 lat (usec) : 250=0.01% 00:10:53.027 lat (msec) : 4=0.35%, 10=0.68%, 20=93.46%, 50=5.49% 00:10:53.027 cpu : usr=3.99%, sys=14.16%, ctx=286, majf=0, minf=7 00:10:53.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:53.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.027 issued rwts: total=4481,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.027 job3: (groupid=0, jobs=1): err= 0: pid=78408: Sat Dec 7 22:42:07 2024 00:10:53.027 read: 
IOPS=2584, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1007msec) 00:10:53.027 slat (usec): min=3, max=8156, avg=194.33, stdev=721.91 00:10:53.027 clat (usec): min=3535, max=38424, avg=24044.36, stdev=6287.04 00:10:53.027 lat (usec): min=8018, max=38443, avg=24238.69, stdev=6323.86 00:10:53.027 clat percentiles (usec): 00:10:53.027 | 1.00th=[ 9765], 5.00th=[13173], 10.00th=[14091], 20.00th=[15795], 00:10:53.027 | 30.00th=[21627], 40.00th=[25035], 50.00th=[26346], 60.00th=[27132], 00:10:53.027 | 70.00th=[27919], 80.00th=[29230], 90.00th=[30540], 95.00th=[31589], 00:10:53.027 | 99.00th=[33817], 99.50th=[35390], 99.90th=[35914], 99.95th=[37487], 00:10:53.027 | 99.99th=[38536] 00:10:53.027 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:10:53.027 slat (usec): min=12, max=6298, avg=154.05, stdev=559.86 00:10:53.027 clat (usec): min=7853, max=33842, avg=21064.09, stdev=4610.02 00:10:53.027 lat (usec): min=7881, max=33864, avg=21218.14, stdev=4624.86 00:10:53.027 clat percentiles (usec): 00:10:53.027 | 1.00th=[11994], 5.00th=[12780], 10.00th=[13173], 20.00th=[16909], 00:10:53.027 | 30.00th=[19530], 40.00th=[21103], 50.00th=[22152], 60.00th=[22938], 00:10:53.027 | 70.00th=[23725], 80.00th=[24511], 90.00th=[26346], 95.00th=[27395], 00:10:53.027 | 99.00th=[30540], 99.50th=[32113], 99.90th=[32375], 99.95th=[33817], 00:10:53.027 | 99.99th=[33817] 00:10:53.027 bw ( KiB/s): min=11608, max=12288, per=18.91%, avg=11948.00, stdev=480.83, samples=2 00:10:53.027 iops : min= 2902, max= 3072, avg=2987.00, stdev=120.21, samples=2 00:10:53.027 lat (msec) : 4=0.02%, 10=0.62%, 20=28.33%, 50=71.03% 00:10:53.027 cpu : usr=3.28%, sys=8.15%, ctx=822, majf=0, minf=10 00:10:53.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:53.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:53.027 issued rwts: total=2603,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:53.027 00:10:53.027 Run status group 0 (all jobs): 00:10:53.027 READ: bw=58.5MiB/s (61.4MB/s), 9.95MiB/s-21.2MiB/s (10.4MB/s-22.3MB/s), io=58.9MiB (61.8MB), run=1001-1007msec 00:10:53.027 WRITE: bw=61.7MiB/s (64.7MB/s), 10.1MiB/s-22.0MiB/s (10.6MB/s-23.0MB/s), io=62.1MiB (65.2MB), run=1001-1007msec 00:10:53.027 00:10:53.027 Disk stats (read/write): 00:10:53.027 nvme0n1: ios=2098/2426, merge=0/0, ticks=17619/15043, in_queue=32662, util=88.37% 00:10:53.027 nvme0n2: ios=4653/4896, merge=0/0, ticks=12389/11288, in_queue=23677, util=88.37% 00:10:53.027 nvme0n3: ios=3648/4096, merge=0/0, ticks=11284/12412, in_queue=23696, util=88.96% 00:10:53.027 nvme0n4: ios=2332/2560, merge=0/0, ticks=19551/16742, in_queue=36293, util=89.72% 00:10:53.027 22:42:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:53.027 22:42:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=78423 00:10:53.027 22:42:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:53.027 22:42:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:53.027 [global] 00:10:53.027 thread=1 00:10:53.027 invalidate=1 00:10:53.027 rw=read 00:10:53.027 time_based=1 00:10:53.027 runtime=10 00:10:53.027 ioengine=libaio 00:10:53.027 direct=1 00:10:53.027 bs=4096 00:10:53.027 iodepth=1 00:10:53.027 
norandommap=1 00:10:53.027 numjobs=1 00:10:53.027 00:10:53.027 [job0] 00:10:53.027 filename=/dev/nvme0n1 00:10:53.027 [job1] 00:10:53.027 filename=/dev/nvme0n2 00:10:53.027 [job2] 00:10:53.027 filename=/dev/nvme0n3 00:10:53.027 [job3] 00:10:53.027 filename=/dev/nvme0n4 00:10:53.027 Could not set queue depth (nvme0n1) 00:10:53.027 Could not set queue depth (nvme0n2) 00:10:53.027 Could not set queue depth (nvme0n3) 00:10:53.027 Could not set queue depth (nvme0n4) 00:10:53.027 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.027 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.027 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.027 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.027 fio-3.35 00:10:53.027 Starting 4 threads 00:10:56.311 22:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:56.311 fio: pid=78471, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:56.311 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=51609600, buflen=4096 00:10:56.311 22:42:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:56.570 fio: pid=78470, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:56.570 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=41877504, buflen=4096 00:10:56.570 22:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:56.570 22:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:56.829 fio: pid=78468, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:56.829 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=6393856, buflen=4096 00:10:56.829 22:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:56.829 22:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:57.088 fio: pid=78469, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:57.088 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=52301824, buflen=4096 00:10:57.088 00:10:57.088 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78468: Sat Dec 7 22:42:11 2024 00:10:57.088 read: IOPS=5125, BW=20.0MiB/s (21.0MB/s)(70.1MiB/3501msec) 00:10:57.088 slat (usec): min=9, max=13225, avg=16.41, stdev=164.88 00:10:57.088 clat (usec): min=131, max=3663, avg=177.33, stdev=74.21 00:10:57.088 lat (usec): min=143, max=13446, avg=193.75, stdev=182.77 00:10:57.088 clat percentiles (usec): 00:10:57.088 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 157], 00:10:57.088 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:10:57.088 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 198], 95.00th=[ 235], 00:10:57.088 | 99.00th=[ 322], 99.50th=[ 347], 
99.90th=[ 840], 99.95th=[ 1713], 00:10:57.088 | 99.99th=[ 3490] 00:10:57.088 bw ( KiB/s): min=20912, max=22224, per=38.22%, avg=21678.67, stdev=556.29, samples=6 00:10:57.088 iops : min= 5228, max= 5556, avg=5419.67, stdev=139.07, samples=6 00:10:57.088 lat (usec) : 250=95.87%, 500=3.95%, 750=0.07%, 1000=0.02% 00:10:57.088 lat (msec) : 2=0.04%, 4=0.04% 00:10:57.088 cpu : usr=1.74%, sys=6.34%, ctx=17956, majf=0, minf=1 00:10:57.088 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.088 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.088 issued rwts: total=17946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.088 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.088 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78469: Sat Dec 7 22:42:11 2024 00:10:57.088 read: IOPS=3381, BW=13.2MiB/s (13.9MB/s)(49.9MiB/3776msec) 00:10:57.088 slat (usec): min=8, max=9587, avg=19.52, stdev=177.78 00:10:57.088 clat (usec): min=127, max=3731, avg=274.49, stdev=87.43 00:10:57.088 lat (usec): min=141, max=10081, avg=294.01, stdev=200.69 00:10:57.088 clat percentiles (usec): 00:10:57.088 | 1.00th=[ 137], 5.00th=[ 149], 10.00th=[ 169], 20.00th=[ 241], 00:10:57.088 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:10:57.088 | 70.00th=[ 289], 80.00th=[ 330], 90.00th=[ 359], 95.00th=[ 375], 00:10:57.088 | 99.00th=[ 453], 99.50th=[ 519], 99.90th=[ 988], 99.95th=[ 1336], 00:10:57.088 | 99.99th=[ 3589] 00:10:57.088 bw ( KiB/s): min= 9976, max=15104, per=23.02%, avg=13058.29, stdev=1916.82, samples=7 00:10:57.088 iops : min= 2494, max= 3776, avg=3264.57, stdev=479.20, samples=7 00:10:57.088 lat (usec) : 250=24.72%, 500=74.64%, 750=0.45%, 1000=0.10% 00:10:57.088 lat (msec) : 2=0.07%, 4=0.02% 00:10:57.088 cpu : usr=1.38%, sys=4.66%, ctx=12799, majf=0, minf=2 00:10:57.088 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.089 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.089 issued rwts: total=12770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.089 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.089 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78470: Sat Dec 7 22:42:11 2024 00:10:57.089 read: IOPS=3149, BW=12.3MiB/s (12.9MB/s)(39.9MiB/3247msec) 00:10:57.089 slat (usec): min=9, max=11509, avg=19.39, stdev=159.54 00:10:57.089 clat (usec): min=152, max=1953, avg=296.22, stdev=63.78 00:10:57.089 lat (usec): min=166, max=11831, avg=315.61, stdev=174.31 00:10:57.089 clat percentiles (usec): 00:10:57.089 | 1.00th=[ 217], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 260], 00:10:57.089 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:10:57.089 | 70.00th=[ 306], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 383], 00:10:57.089 | 99.00th=[ 515], 99.50th=[ 562], 99.90th=[ 938], 99.95th=[ 1029], 00:10:57.089 | 99.99th=[ 1434] 00:10:57.089 bw ( KiB/s): min=10080, max=14056, per=22.47%, avg=12744.00, stdev=1867.14, samples=6 00:10:57.089 iops : min= 2520, max= 3514, avg=3186.00, stdev=466.78, samples=6 00:10:57.089 lat (usec) : 250=9.00%, 500=89.73%, 750=1.09%, 1000=0.11% 00:10:57.089 lat (msec) : 2=0.07% 00:10:57.089 cpu : usr=1.39%, sys=4.68%, ctx=10229, 
majf=0, minf=1 00:10:57.089 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.089 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.089 issued rwts: total=10225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.089 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.089 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78471: Sat Dec 7 22:42:11 2024 00:10:57.089 read: IOPS=4252, BW=16.6MiB/s (17.4MB/s)(49.2MiB/2963msec) 00:10:57.089 slat (nsec): min=8220, max=91780, avg=15248.26, stdev=4384.96 00:10:57.089 clat (usec): min=150, max=6026, avg=218.28, stdev=154.28 00:10:57.089 lat (usec): min=164, max=6040, avg=233.53, stdev=155.08 00:10:57.089 clat percentiles (usec): 00:10:57.089 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:10:57.089 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 184], 00:10:57.089 | 70.00th=[ 190], 80.00th=[ 302], 90.00th=[ 359], 95.00th=[ 375], 00:10:57.089 | 99.00th=[ 412], 99.50th=[ 482], 99.90th=[ 2180], 99.95th=[ 3818], 00:10:57.089 | 99.99th=[ 5932] 00:10:57.089 bw ( KiB/s): min=10736, max=21168, per=32.26%, avg=18296.00, stdev=4283.44, samples=5 00:10:57.089 iops : min= 2684, max= 5292, avg=4574.00, stdev=1070.86, samples=5 00:10:57.089 lat (usec) : 250=79.53%, 500=20.08%, 750=0.25% 00:10:57.089 lat (msec) : 2=0.03%, 4=0.06%, 10=0.05% 00:10:57.089 cpu : usr=1.45%, sys=5.57%, ctx=12605, majf=0, minf=1 00:10:57.089 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.089 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.089 issued rwts: total=12601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.089 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.089 00:10:57.089 Run status group 0 (all jobs): 00:10:57.089 READ: bw=55.4MiB/s (58.1MB/s), 12.3MiB/s-20.0MiB/s (12.9MB/s-21.0MB/s), io=209MiB (219MB), run=2963-3776msec 00:10:57.089 00:10:57.089 Disk stats (read/write): 00:10:57.089 nvme0n1: ios=17364/0, merge=0/0, ticks=3074/0, in_queue=3074, util=95.11% 00:10:57.089 nvme0n2: ios=11849/0, merge=0/0, ticks=3312/0, in_queue=3312, util=95.64% 00:10:57.089 nvme0n3: ios=9827/0, merge=0/0, ticks=2920/0, in_queue=2920, util=96.24% 00:10:57.089 nvme0n4: ios=12359/0, merge=0/0, ticks=2632/0, in_queue=2632, util=96.42% 00:10:57.089 22:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.089 22:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:57.347 22:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.347 22:42:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:57.606 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.606 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 
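The rpc.py calls just traced are target/fio.sh tearing down the test bdevs now that fio has exited; Malloc5 and Malloc6 are removed by the same loop in the entries that follow. Reconstructed from the xtrace (a sketch only, with the path and variable names exactly as traced), the teardown loop amounts to:

for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
	# one JSON-RPC delete per malloc-backed bdev (Malloc0 through Malloc6 in this run)
	/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
done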
00:10:57.865 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.865 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:58.123 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:58.124 22:42:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:58.383 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:58.383 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 78423 00:10:58.383 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:58.383 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:58.383 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.383 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:58.383 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:58.383 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:58.383 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.383 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:58.383 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.383 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:58.383 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:58.383 nvmf hotplug test: fio failed as expected 00:10:58.383 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:58.383 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:58.642 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:58.642 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:58.642 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:58.642 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:58.642 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:58.642 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:58.642 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:58.642 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:58.642 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:58.642 22:42:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:58.642 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:58.642 rmmod nvme_tcp 00:10:58.901 rmmod nvme_fabrics 00:10:58.901 rmmod nvme_keyring 00:10:58.901 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:58.901 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:58.901 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:58.901 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 78041 ']' 00:10:58.901 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 78041 00:10:58.901 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 78041 ']' 00:10:58.901 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 78041 00:10:58.901 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:58.901 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:58.901 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78041 00:10:58.901 killing process with pid 78041 00:10:58.901 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:58.901 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:58.901 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78041' 00:10:58.901 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 78041 00:10:58.901 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 78041 00:10:58.902 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:58.902 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:58.902 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:58.902 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:58.902 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:10:58.902 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:58.902 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:10:58.902 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:58.902 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:58.902 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:58.902 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # 
ip link set nvmf_tgt_br2 nomaster 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:10:59.161 ************************************ 00:10:59.161 END TEST nvmf_fio_target 00:10:59.161 ************************************ 00:10:59.161 00:10:59.161 real 0m19.623s 00:10:59.161 user 1m13.505s 00:10:59.161 sys 0m10.561s 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.161 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.421 22:42:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:59.421 22:42:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:59.421 22:42:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.421 22:42:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:59.421 ************************************ 00:10:59.421 START TEST nvmf_bdevio 00:10:59.421 ************************************ 00:10:59.421 22:42:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:59.421 * Looking for test storage... 
00:10:59.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:59.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.421 --rc genhtml_branch_coverage=1 00:10:59.421 --rc genhtml_function_coverage=1 00:10:59.421 --rc genhtml_legend=1 00:10:59.421 --rc geninfo_all_blocks=1 00:10:59.421 --rc geninfo_unexecuted_blocks=1 00:10:59.421 00:10:59.421 ' 00:10:59.421 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:59.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.421 --rc genhtml_branch_coverage=1 00:10:59.421 --rc genhtml_function_coverage=1 00:10:59.421 --rc genhtml_legend=1 00:10:59.421 --rc geninfo_all_blocks=1 00:10:59.422 --rc geninfo_unexecuted_blocks=1 00:10:59.422 00:10:59.422 ' 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:59.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.422 --rc genhtml_branch_coverage=1 00:10:59.422 --rc genhtml_function_coverage=1 00:10:59.422 --rc genhtml_legend=1 00:10:59.422 --rc geninfo_all_blocks=1 00:10:59.422 --rc geninfo_unexecuted_blocks=1 00:10:59.422 00:10:59.422 ' 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:59.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:59.422 --rc genhtml_branch_coverage=1 00:10:59.422 --rc genhtml_function_coverage=1 00:10:59.422 --rc genhtml_legend=1 00:10:59.422 --rc geninfo_all_blocks=1 00:10:59.422 --rc geninfo_unexecuted_blocks=1 00:10:59.422 00:10:59.422 ' 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:59.422 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
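nvmftestinit is traced next. For this run (NET_TYPE=virt, --transport=tcp) its control flow, inferred from the nvmf/common.sh xtrace that follows rather than quoted from the script, reduces to roughly:

trap nvmftestfini SIGINT SIGTERM EXIT   # tear the virtual network back down on exit
prepare_net_devs                        # NET_TYPE=virt: is_hw stays "no", no physical NICs are claimed
if [[ $TEST_TRANSPORT == tcp ]]; then   # variable name assumed; it carries the --transport=tcp argument
	nvmf_veth_init                  # build the veth/bridge test topology traced below
fi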
00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:59.422 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:59.423 Cannot find device "nvmf_init_br" 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:59.423 Cannot find device "nvmf_init_br2" 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:59.423 Cannot find device "nvmf_tgt_br" 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:10:59.423 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:59.681 Cannot find device "nvmf_tgt_br2" 00:10:59.681 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:59.682 Cannot find device "nvmf_init_br" 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:59.682 Cannot find device "nvmf_init_br2" 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:59.682 Cannot find device "nvmf_tgt_br" 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:59.682 Cannot find device "nvmf_tgt_br2" 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:59.682 Cannot find device "nvmf_br" 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:59.682 Cannot find device "nvmf_init_if" 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:59.682 Cannot find device "nvmf_init_if2" 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:59.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:59.682 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:59.682 
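The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init first tears down whatever a previous run may have left behind before creating nvmf_tgt_ns_spdk and the veth pairs that follow. The end state it assembles, sketched from the trace (the diagram is an annotation, not script output):

nvmf_init_if  10.0.0.1/24 (host)  -- veth -- nvmf_init_br  --+
nvmf_init_if2 10.0.0.2/24 (host)  -- veth -- nvmf_init_br2 --+-- nvmf_br (bridge)
nvmf_tgt_if   10.0.0.3/24 (netns) -- veth -- nvmf_tgt_br   --+
nvmf_tgt_if2  10.0.0.4/24 (netns) -- veth -- nvmf_tgt_br2  --+

iptables rules then accept TCP port 4420 on both initiator interfaces plus forwarding across nvmf_br, and the pings that close the setup check each address in both directions.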
22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:59.682 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:59.941 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:59.941 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:10:59.941 00:10:59.941 --- 10.0.0.3 ping statistics --- 00:10:59.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.941 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:59.941 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:59.941 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:10:59.941 00:10:59.941 --- 10.0.0.4 ping statistics --- 00:10:59.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.941 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:59.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:59.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:10:59.941 00:10:59.941 --- 10.0.0.1 ping statistics --- 00:10:59.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.941 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:59.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:59.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:10:59.941 00:10:59.941 --- 10.0.0.2 ping statistics --- 00:10:59.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.941 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # return 0 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=78792 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 78792 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 78792 ']' 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:59.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:59.941 22:42:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.941 [2024-12-07 22:42:14.613596] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
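With addresses assigned and all four ping paths verified, the target application is launched inside the namespace: NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD (`ip netns exec nvmf_tgt_ns_spdk`), and waitforlisten blocks until something answers on /var/tmp/spdk.sock. A minimal launch-and-wait sketch; probing readiness with rpc_get_methods is an assumption here, the harness's waitforlisten may poll differently:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # poll the RPC socket until the app is up; bail out if the process died
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid"
        sleep 0.2
    done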
00:10:59.942 [2024-12-07 22:42:14.613710] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.200 [2024-12-07 22:42:14.755021] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.201 [2024-12-07 22:42:14.796793] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.201 [2024-12-07 22:42:14.796865] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.201 [2024-12-07 22:42:14.796911] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.201 [2024-12-07 22:42:14.796922] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.201 [2024-12-07 22:42:14.796931] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.201 [2024-12-07 22:42:14.797103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:11:00.201 [2024-12-07 22:42:14.797259] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:11:00.201 [2024-12-07 22:42:14.797898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.201 [2024-12-07 22:42:14.797924] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:11:00.201 [2024-12-07 22:42:14.830934] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.136 [2024-12-07 22:42:15.635753] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.136 Malloc0 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.136 [2024-12-07 22:42:15.682042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:01.136 { 00:11:01.136 "params": { 00:11:01.136 "name": "Nvme$subsystem", 00:11:01.136 "trtype": "$TEST_TRANSPORT", 00:11:01.136 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:01.136 "adrfam": "ipv4", 00:11:01.136 "trsvcid": "$NVMF_PORT", 00:11:01.136 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:01.136 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:01.136 "hdgst": ${hdgst:-false}, 00:11:01.136 "ddgst": ${ddgst:-false} 00:11:01.136 }, 00:11:01.136 "method": "bdev_nvme_attach_controller" 00:11:01.136 } 00:11:01.136 EOF 00:11:01.136 )") 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
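Those five rpc_cmd calls are the entire provisioning step for this suite: a TCP transport, a 64 MiB malloc bdev, a subsystem carrying that bdev as namespace 1, and a listener on the in-namespace address 10.0.0.3:4420. Spelled out against rpc.py directly (a sketch; rpc_cmd is a thin wrapper around it, and /var/tmp/spdk.sock is the default socket):

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
    $RPC nvmf_create_transport -t tcp -o -u 8192                 # -u: in-capsule data size
    $RPC bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

gen_nvmf_target_json then emits the bdev_nvme_attach_controller config printed next, and bdevio reads it as --json /dev/fd/62, i.e. through a bash process substitution, so no temporary file is involved.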
00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:11:01.136 22:42:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:01.136 "params": { 00:11:01.136 "name": "Nvme1", 00:11:01.136 "trtype": "tcp", 00:11:01.136 "traddr": "10.0.0.3", 00:11:01.136 "adrfam": "ipv4", 00:11:01.136 "trsvcid": "4420", 00:11:01.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:01.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:01.136 "hdgst": false, 00:11:01.136 "ddgst": false 00:11:01.136 }, 00:11:01.136 "method": "bdev_nvme_attach_controller" 00:11:01.136 }' 00:11:01.136 [2024-12-07 22:42:15.742252] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:11:01.136 [2024-12-07 22:42:15.742341] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78828 ] 00:11:01.136 [2024-12-07 22:42:15.882543] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:01.396 [2024-12-07 22:42:15.927212] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.396 [2024-12-07 22:42:15.927367] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.396 [2024-12-07 22:42:15.927373] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.396 [2024-12-07 22:42:15.969004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:01.396 I/O targets: 00:11:01.396 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:01.396 00:11:01.396 00:11:01.396 CUnit - A unit testing framework for C - Version 2.1-3 00:11:01.396 http://cunit.sourceforge.net/ 00:11:01.396 00:11:01.396 00:11:01.396 Suite: bdevio tests on: Nvme1n1 00:11:01.396 Test: blockdev write read block ...passed 00:11:01.396 Test: blockdev write zeroes read block ...passed 00:11:01.396 Test: blockdev write zeroes read no split ...passed 00:11:01.396 Test: blockdev write zeroes read split ...passed 00:11:01.396 Test: blockdev write zeroes read split partial ...passed 00:11:01.396 Test: blockdev reset ...[2024-12-07 22:42:16.101356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:01.396 [2024-12-07 22:42:16.101470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf260d0 (9): Bad file descriptor 00:11:01.396 [2024-12-07 22:42:16.118455] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:01.396 passed 00:11:01.396 Test: blockdev write read 8 blocks ...passed 00:11:01.396 Test: blockdev write read size > 128k ...passed 00:11:01.396 Test: blockdev write read invalid size ...passed 00:11:01.396 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:01.396 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:01.396 Test: blockdev write read max offset ...passed 00:11:01.396 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:01.396 Test: blockdev writev readv 8 blocks ...passed 00:11:01.396 Test: blockdev writev readv 30 x 1block ...passed 00:11:01.396 Test: blockdev writev readv block ...passed 00:11:01.396 Test: blockdev writev readv size > 128k ...passed 00:11:01.396 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:01.396 Test: blockdev comparev and writev ...[2024-12-07 22:42:16.126230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.396 [2024-12-07 22:42:16.126289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:01.396 [2024-12-07 22:42:16.126319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.396 [2024-12-07 22:42:16.126333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:01.396 [2024-12-07 22:42:16.126747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.396 [2024-12-07 22:42:16.126784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:01.396 [2024-12-07 22:42:16.126816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.396 [2024-12-07 22:42:16.126830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:01.396 [2024-12-07 22:42:16.127170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.396 [2024-12-07 22:42:16.127208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:01.396 [2024-12-07 22:42:16.127230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.396 [2024-12-07 22:42:16.127243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:01.396 [2024-12-07 22:42:16.127575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.396 [2024-12-07 22:42:16.127613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:01.396 [2024-12-07 22:42:16.127634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.396 [2024-12-07 22:42:16.127646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:01.396 passed 00:11:01.396 Test: blockdev nvme passthru rw ...passed 00:11:01.396 Test: blockdev nvme passthru vendor specific ...[2024-12-07 22:42:16.128653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.396 [2024-12-07 22:42:16.128684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:01.396 [2024-12-07 22:42:16.128807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.396 [2024-12-07 22:42:16.128826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:01.396 [2024-12-07 22:42:16.128959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.396 [2024-12-07 22:42:16.128980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:01.396 [2024-12-07 22:42:16.129106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:01.396 [2024-12-07 22:42:16.129139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:01.396 passed 00:11:01.396 Test: blockdev nvme admin passthru ...passed 00:11:01.396 Test: blockdev copy ...passed 00:11:01.396 00:11:01.396 Run Summary: Type Total Ran Passed Failed Inactive 00:11:01.396 suites 1 1 n/a 0 0 00:11:01.396 tests 23 23 23 0 0 00:11:01.396 asserts 152 152 152 0 n/a 00:11:01.396 00:11:01.396 Elapsed time = 0.147 seconds 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:01.655 rmmod nvme_tcp 00:11:01.655 rmmod nvme_fabrics 00:11:01.655 rmmod nvme_keyring 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
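The rmmod lines are nvmftestfini unloading the kernel initiator stack; note the `set +e` / `for i in {1..20}` bracket around `modprobe -v -r`, a bounded retry, since the modules refuse to unload while a controller teardown is still in flight. Roughly as below (the back-off between attempts is assumed; the trace only shows a first try that succeeds):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # also drags out the nvme_fabrics/nvme_keyring deps
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e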
00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@513 -- # '[' -n 78792 ']' 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 78792 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 78792 ']' 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 78792 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:01.655 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78792 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:01.914 killing process with pid 78792 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78792' 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 78792 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 78792 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:01.914 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:02.172 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:11:02.172 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:02.173 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:02.173 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:02.173 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:02.173 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.173 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.173 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.173 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:02.173 00:11:02.173 real 0m2.897s 00:11:02.173 user 0m8.390s 00:11:02.173 sys 0m0.778s 00:11:02.173 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.173 22:42:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.173 ************************************ 00:11:02.173 END TEST nvmf_bdevio 00:11:02.173 ************************************ 00:11:02.173 22:42:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:02.173 ************************************ 00:11:02.173 END TEST nvmf_target_core 00:11:02.173 ************************************ 00:11:02.173 00:11:02.173 real 2m29.806s 00:11:02.173 user 6m29.557s 00:11:02.173 sys 0m52.985s 00:11:02.173 22:42:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:02.173 22:42:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:02.173 22:42:16 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:02.173 22:42:16 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:02.173 22:42:16 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.173 22:42:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:02.173 ************************************ 00:11:02.173 START TEST nvmf_target_extra 00:11:02.173 ************************************ 00:11:02.173 22:42:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:02.432 * Looking for test storage... 
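One teardown detail from the iptr step above is worth spelling out: every firewall rule the test installed went in tagged via the ipts wrapper, which appends an `-m comment --comment 'SPDK_NVMF:<original args>'` marker, so cleanup can drop all of them in one pass, with no rule numbers to track, by round-tripping the ruleset through a filter:

    # install, tagged (what ipts adds around a plain iptables call)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # remove every tagged rule at once (what iptr does)
    iptables-save | grep -v SPDK_NVMF | iptables-restore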
00:11:02.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:02.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.432 --rc genhtml_branch_coverage=1 00:11:02.432 --rc genhtml_function_coverage=1 00:11:02.432 --rc genhtml_legend=1 00:11:02.432 --rc geninfo_all_blocks=1 00:11:02.432 --rc geninfo_unexecuted_blocks=1 00:11:02.432 00:11:02.432 ' 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:02.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.432 --rc genhtml_branch_coverage=1 00:11:02.432 --rc genhtml_function_coverage=1 00:11:02.432 --rc genhtml_legend=1 00:11:02.432 --rc geninfo_all_blocks=1 00:11:02.432 --rc geninfo_unexecuted_blocks=1 00:11:02.432 00:11:02.432 ' 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:02.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.432 --rc genhtml_branch_coverage=1 00:11:02.432 --rc genhtml_function_coverage=1 00:11:02.432 --rc genhtml_legend=1 00:11:02.432 --rc geninfo_all_blocks=1 00:11:02.432 --rc geninfo_unexecuted_blocks=1 00:11:02.432 00:11:02.432 ' 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:02.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.432 --rc genhtml_branch_coverage=1 00:11:02.432 --rc genhtml_function_coverage=1 00:11:02.432 --rc genhtml_legend=1 00:11:02.432 --rc geninfo_all_blocks=1 00:11:02.432 --rc geninfo_unexecuted_blocks=1 00:11:02.432 00:11:02.432 ' 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.432 22:42:17 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.432 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:02.432 ************************************ 00:11:02.432 START TEST nvmf_auth_target 00:11:02.432 ************************************ 00:11:02.432 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:02.692 * Looking for test storage... 
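The `line 33: [: : integer expression expected` message above is a real (if harmless) bug captured by the log: common.sh line 33 evaluates `'[' '' -eq 1 ']'`, and test's `-eq` needs an integer on both sides, so an unset variable makes the comparison error out rather than evaluate false; the script survives only because the non-zero status falls through to the next branch. A defensive rewrite, with SOME_FLAG as a hypothetical stand-in for whatever variable line 33 actually checks:

    # errors with "integer expression expected" when SOME_FLAG is empty or unset:
    [ "$SOME_FLAG" -eq 1 ] && do_thing
    # robust: default the expansion so the test always sees an integer
    [ "${SOME_FLAG:-0}" -eq 1 ] && do_thing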
00:11:02.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:02.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.692 --rc genhtml_branch_coverage=1 00:11:02.692 --rc genhtml_function_coverage=1 00:11:02.692 --rc genhtml_legend=1 00:11:02.692 --rc geninfo_all_blocks=1 00:11:02.692 --rc geninfo_unexecuted_blocks=1 00:11:02.692 00:11:02.692 ' 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:02.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.692 --rc genhtml_branch_coverage=1 00:11:02.692 --rc genhtml_function_coverage=1 00:11:02.692 --rc genhtml_legend=1 00:11:02.692 --rc geninfo_all_blocks=1 00:11:02.692 --rc geninfo_unexecuted_blocks=1 00:11:02.692 00:11:02.692 ' 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:02.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.692 --rc genhtml_branch_coverage=1 00:11:02.692 --rc genhtml_function_coverage=1 00:11:02.692 --rc genhtml_legend=1 00:11:02.692 --rc geninfo_all_blocks=1 00:11:02.692 --rc geninfo_unexecuted_blocks=1 00:11:02.692 00:11:02.692 ' 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:02.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.692 --rc genhtml_branch_coverage=1 00:11:02.692 --rc genhtml_function_coverage=1 00:11:02.692 --rc genhtml_legend=1 00:11:02.692 --rc geninfo_all_blocks=1 00:11:02.692 --rc geninfo_unexecuted_blocks=1 00:11:02.692 00:11:02.692 ' 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:02.692 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.693 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:02.693 
22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:02.693 Cannot find device "nvmf_init_br" 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:02.693 Cannot find device "nvmf_init_br2" 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:02.693 Cannot find device "nvmf_tgt_br" 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:02.693 Cannot find device "nvmf_tgt_br2" 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:02.693 Cannot find device "nvmf_init_br" 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:02.693 Cannot find device "nvmf_init_br2" 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:11:02.693 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:02.693 Cannot find device "nvmf_tgt_br" 00:11:02.694 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:11:02.694 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:02.694 Cannot find device "nvmf_tgt_br2" 00:11:02.694 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:11:02.694 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:02.952 Cannot find device "nvmf_br" 00:11:02.952 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:11:02.952 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:02.952 Cannot find device "nvmf_init_if" 00:11:02.952 22:42:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:11:02.952 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:02.952 Cannot find device "nvmf_init_if2" 00:11:02.952 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:11:02.952 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:02.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:02.952 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:11:02.952 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:02.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:02.952 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:11:02.952 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:02.952 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:02.952 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:02.952 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:02.952 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:02.952 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:02.952 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:02.952 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:02.953 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:02.953 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:02.953 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:02.953 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:02.953 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:02.953 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:02.953 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:02.953 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:02.953 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:02.953 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:02.953 22:42:17 
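The "Cannot find device" messages above are expected: nvmf_veth_init first probes for leftovers from an earlier run, and each failing ip command is followed by true so the cleanup phase cannot abort the test. It then builds a fresh topology: one network namespace (nvmf_tgt_ns_spdk) for the target, four veth pairs whose *_br peers are later enslaved to the nvmf_br bridge, and /24 addresses split between the initiator side (10.0.0.1, 10.0.0.2) and the namespace side (10.0.0.3, 10.0.0.4). A condensed sketch of one initiator/target pair, using the same names as the trace:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge end
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + bridge end
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up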
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:02.953 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:02.953 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:02.953 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:02.953 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:02.953 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:02.953 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:02.953 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:03.212 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:03.212 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:03.212 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:03.212 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:03.212 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:03.212 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:03.212 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:03.212 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:03.212 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:11:03.212 00:11:03.212 --- 10.0.0.3 ping statistics --- 00:11:03.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.212 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:03.212 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:03.212 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:03.212 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:11:03.212 00:11:03.212 --- 10.0.0.4 ping statistics --- 00:11:03.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.212 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:03.212 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:03.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:03.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:11:03.212 00:11:03.212 --- 10.0.0.1 ping statistics --- 00:11:03.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.212 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:11:03.212 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:03.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:11:03.212 00:11:03.212 --- 10.0.0.2 ping statistics --- 00:11:03.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.212 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:03.212 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.212 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # return 0 00:11:03.212 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:03.212 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.212 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:03.212 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:03.212 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.213 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:03.213 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:03.213 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:11:03.213 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:03.213 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:03.213 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.213 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=79110 00:11:03.213 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 79110 00:11:03.213 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:03.213 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 79110 ']' 00:11:03.213 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.213 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:03.213 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
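With the bridge wired up, the iptables ACCEPT rules in place for TCP port 4420, and all four addresses answering pings, nvmfappstart launches the SPDK target inside the namespace (NVMF_APP was prefixed with the ip netns exec wrapper at common.sh@227) and waitforlisten blocks until the app's JSON-RPC socket responds. A hypothetical standalone equivalent, assuming rpc_get_methods as the readiness probe (the exact retry logic lives in autotest_common.sh and is assumed here):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
  nvmfpid=$!
  # poll the default RPC socket (/var/tmp/spdk.sock) until the target answers
  until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done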
00:11:03.213 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:03.213 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=79129 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=bc06f9f03a0b9c8e06879b5bf88583f8d30c2418eb3de7fb 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.1N3 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key bc06f9f03a0b9c8e06879b5bf88583f8d30c2418eb3de7fb 0 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 bc06f9f03a0b9c8e06879b5bf88583f8d30c2418eb3de7fb 0 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=bc06f9f03a0b9c8e06879b5bf88583f8d30c2418eb3de7fb 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:03.472 22:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.1N3 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.1N3 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.1N3 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:11:03.472 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=eb5ea04beabfead6d1ba36967825f0cfa1cc0547bc7bef58a89d4e5e9de2e1a1 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.bfY 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key eb5ea04beabfead6d1ba36967825f0cfa1cc0547bc7bef58a89d4e5e9de2e1a1 3 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 eb5ea04beabfead6d1ba36967825f0cfa1cc0547bc7bef58a89d4e5e9de2e1a1 3 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=eb5ea04beabfead6d1ba36967825f0cfa1cc0547bc7bef58a89d4e5e9de2e1a1 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.bfY 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.bfY 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.bfY 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:11:03.732 22:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=27d30f65cf9bd07004f3f4488ba4413e 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.6ZD 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 27d30f65cf9bd07004f3f4488ba4413e 1 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 27d30f65cf9bd07004f3f4488ba4413e 1 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=27d30f65cf9bd07004f3f4488ba4413e 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.6ZD 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.6ZD 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.6ZD 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=73834d4a06c2a0cfae2ff87626044a0c3eeaa7e5a263496c 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.EpG 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 73834d4a06c2a0cfae2ff87626044a0c3eeaa7e5a263496c 2 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 73834d4a06c2a0cfae2ff87626044a0c3eeaa7e5a263496c 2 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=73834d4a06c2a0cfae2ff87626044a0c3eeaa7e5a263496c 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.EpG 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.EpG 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.EpG 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=4440a76caa78a1456f4b45d47e7c2525e409d3400d361766 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.a0v 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 4440a76caa78a1456f4b45d47e7c2525e409d3400d361766 2 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 4440a76caa78a1456f4b45d47e7c2525e409d3400d361766 2 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=4440a76caa78a1456f4b45d47e7c2525e409d3400d361766 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:11:03.732 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.a0v 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.a0v 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.a0v 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:03.992 22:42:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=f42a2ed7387f57597dc817aa7fe233c8 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.f1q 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key f42a2ed7387f57597dc817aa7fe233c8 1 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 f42a2ed7387f57597dc817aa7fe233c8 1 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=f42a2ed7387f57597dc817aa7fe233c8 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.f1q 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.f1q 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.f1q 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=1aac65173c28335732f73767affa401b4d5fb26a417eb6e8b27e1bfdde1c52df 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.LBw 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 
1aac65173c28335732f73767affa401b4d5fb26a417eb6e8b27e1bfdde1c52df 3 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 1aac65173c28335732f73767affa401b4d5fb26a417eb6e8b27e1bfdde1c52df 3 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=1aac65173c28335732f73767affa401b4d5fb26a417eb6e8b27e1bfdde1c52df 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.LBw 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.LBw 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.LBw 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 79110 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 79110 ']' 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:03.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:03.992 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.251 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.251 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:04.251 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 79129 /var/tmp/host.sock 00:11:04.251 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 79129 ']' 00:11:04.251 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:11:04.251 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:04.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:04.252 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
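Each gen_dhchap_key call above reads half the requested length in random bytes with xxd and keeps the hex expansion as the secret (so "null 48" yields 48 hex characters), and format_dhchap_key wraps it as DHHC-1:<id>:...: where the id selects the hash applied to the secret (null=0, sha256=1, sha384=2, sha512=3, per the digests map in the trace). The embedded python step itself is not echoed; the NVMe DH-HMAC-CHAP secret representation base64-encodes the secret followed by its little-endian CRC-32, which the lengths of the secrets logged later are consistent with. A sketch of that framing under those assumptions:

  key=$(xxd -p -c0 -l 24 /dev/urandom)   # 48 hex chars used as the secret bytes
  # append the secret's CRC-32 (little endian) per the spec, then base64 the whole thing
  python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); c=zlib.crc32(s).to_bytes(4,"little"); print("DHHC-1:00:%s:" % base64.b64encode(s+c).decode())' "$key"

Note that ckeys[3] is left empty above, so the key3 pass later exercises one-way (host-only) authentication rather than the bidirectional flow used by keys 0 through 2.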
00:11:04.252 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:04.252 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.820 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.820 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:04.820 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:04.820 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.820 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.820 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.820 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:04.820 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1N3 00:11:04.820 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.820 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.820 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.820 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.1N3 00:11:04.820 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1N3 00:11:05.079 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.bfY ]] 00:11:05.079 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bfY 00:11:05.079 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.079 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.079 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.079 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bfY 00:11:05.079 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bfY 00:11:05.338 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:05.338 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.6ZD 00:11:05.338 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.338 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.338 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.338 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.6ZD 00:11:05.338 22:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.6ZD 00:11:05.597 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.EpG ]] 00:11:05.597 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EpG 00:11:05.597 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.597 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.597 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.597 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EpG 00:11:05.597 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EpG 00:11:05.855 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:05.856 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.a0v 00:11:05.856 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.856 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:05.856 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.856 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.a0v 00:11:05.856 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.a0v 00:11:06.115 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.f1q ]] 00:11:06.115 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.f1q 00:11:06.115 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.115 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.115 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.115 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.f1q 00:11:06.115 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.f1q 00:11:06.373 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:06.373 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.LBw 00:11:06.373 22:42:20 
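Every key file is then registered twice: rpc_cmd adds it to the target's keyring over the default /var/tmp/spdk.sock, and hostrpc adds the same file to the host-side spdk_tgt over /var/tmp/host.sock, with controller keys taking the ckeyN names. Standalone, one registration pair looks like:

  scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.1N3
  scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1N3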
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.373 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.373 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.373 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.LBw 00:11:06.373 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.LBw 00:11:06.632 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:06.632 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:06.632 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:06.632 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:06.632 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:06.632 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:06.891 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:06.891 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:06.891 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:06.891 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:06.891 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:06.891 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.891 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:06.891 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.891 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.891 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.891 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:06.891 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:06.891 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:07.150 00:11:07.151 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:07.151 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.151 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:07.410 22:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.410 22:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.410 22:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.410 22:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.410 22:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.410 22:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:07.410 { 00:11:07.410 "cntlid": 1, 00:11:07.410 "qid": 0, 00:11:07.410 "state": "enabled", 00:11:07.410 "thread": "nvmf_tgt_poll_group_000", 00:11:07.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:07.410 "listen_address": { 00:11:07.410 "trtype": "TCP", 00:11:07.410 "adrfam": "IPv4", 00:11:07.410 "traddr": "10.0.0.3", 00:11:07.410 "trsvcid": "4420" 00:11:07.410 }, 00:11:07.410 "peer_address": { 00:11:07.410 "trtype": "TCP", 00:11:07.410 "adrfam": "IPv4", 00:11:07.410 "traddr": "10.0.0.1", 00:11:07.410 "trsvcid": "53048" 00:11:07.410 }, 00:11:07.410 "auth": { 00:11:07.410 "state": "completed", 00:11:07.410 "digest": "sha256", 00:11:07.410 "dhgroup": "null" 00:11:07.410 } 00:11:07.410 } 00:11:07.410 ]' 00:11:07.410 22:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.670 22:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:07.670 22:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:07.670 22:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:07.670 22:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:07.670 22:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.670 22:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.670 22:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:07.929 22:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:11:07.929 22:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:11:12.147 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:12.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:12.147 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:12.147 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.147 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.147 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.147 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:12.147 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:12.147 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:12.405 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:12.405 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:12.405 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:12.405 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:12.405 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:12.405 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:12.406 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.406 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.406 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.406 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.406 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.406 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.406 22:42:26 
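Each connect_authenticate pass follows the same shape: nvmf_subsystem_add_host registers the host NQN with its --dhchap-key (and --dhchap-ctrlr-key when a controller key exists), bdev_nvme_attach_controller authenticates over the SPDK initiator, nvmf_subsystem_get_qpairs must then report auth.state "completed" with the expected digest and dhgroup, and after detaching, the same credentials are replayed through the kernel initiator via nvme connect before the host is removed again. A trimmed sketch of the kernel-side invocation, with the logged secrets elided as placeholders:

  nvme connect -t tcp -a 10.0.0.3 -l 0 -i 1 \
      -n nqn.2024-03.io.spdk:cnode0 \
      -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 \
      --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 \
      --dhchap-secret 'DHHC-1:01:<base64 host secret>:' \
      --dhchap-ctrl-secret 'DHHC-1:02:<base64 controller secret>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0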
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:12.697 00:11:12.697 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.697 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.697 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.954 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.954 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.954 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.954 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.954 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.954 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.954 { 00:11:12.954 "cntlid": 3, 00:11:12.954 "qid": 0, 00:11:12.954 "state": "enabled", 00:11:12.954 "thread": "nvmf_tgt_poll_group_000", 00:11:12.954 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:12.954 "listen_address": { 00:11:12.954 "trtype": "TCP", 00:11:12.954 "adrfam": "IPv4", 00:11:12.954 "traddr": "10.0.0.3", 00:11:12.954 "trsvcid": "4420" 00:11:12.954 }, 00:11:12.954 "peer_address": { 00:11:12.954 "trtype": "TCP", 00:11:12.954 "adrfam": "IPv4", 00:11:12.954 "traddr": "10.0.0.1", 00:11:12.954 "trsvcid": "47926" 00:11:12.954 }, 00:11:12.954 "auth": { 00:11:12.954 "state": "completed", 00:11:12.954 "digest": "sha256", 00:11:12.954 "dhgroup": "null" 00:11:12.954 } 00:11:12.954 } 00:11:12.954 ]' 00:11:12.954 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.954 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:12.954 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.954 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:12.954 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:13.212 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:13.212 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:13.212 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.212 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret 
DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:11:13.212 22:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:14.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.146 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:14.714 00:11:14.714 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.714 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.714 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.972 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.972 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.973 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.973 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.973 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.973 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.973 { 00:11:14.973 "cntlid": 5, 00:11:14.973 "qid": 0, 00:11:14.973 "state": "enabled", 00:11:14.973 "thread": "nvmf_tgt_poll_group_000", 00:11:14.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:14.973 "listen_address": { 00:11:14.973 "trtype": "TCP", 00:11:14.973 "adrfam": "IPv4", 00:11:14.973 "traddr": "10.0.0.3", 00:11:14.973 "trsvcid": "4420" 00:11:14.973 }, 00:11:14.973 "peer_address": { 00:11:14.973 "trtype": "TCP", 00:11:14.973 "adrfam": "IPv4", 00:11:14.973 "traddr": "10.0.0.1", 00:11:14.973 "trsvcid": "47948" 00:11:14.973 }, 00:11:14.973 "auth": { 00:11:14.973 "state": "completed", 00:11:14.973 "digest": "sha256", 00:11:14.973 "dhgroup": "null" 00:11:14.973 } 00:11:14.973 } 00:11:14.973 ]' 00:11:14.973 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.973 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:14.973 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.973 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:14.973 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.973 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.973 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.973 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:15.231 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:11:15.231 22:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:11:15.800 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.800 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:15.800 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.800 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.800 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.800 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.800 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:15.800 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:16.367 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:16.367 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:16.367 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:16.367 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:16.367 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:16.367 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:16.367 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3 00:11:16.367 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.367 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.367 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.367 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:16.367 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:16.367 22:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:16.626 00:11:16.626 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.626 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.626 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:16.885 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.885 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.885 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.885 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.885 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.885 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:16.885 { 00:11:16.885 "cntlid": 7, 00:11:16.885 "qid": 0, 00:11:16.885 "state": "enabled", 00:11:16.885 "thread": "nvmf_tgt_poll_group_000", 00:11:16.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:16.885 "listen_address": { 00:11:16.885 "trtype": "TCP", 00:11:16.885 "adrfam": "IPv4", 00:11:16.885 "traddr": "10.0.0.3", 00:11:16.885 "trsvcid": "4420" 00:11:16.885 }, 00:11:16.885 "peer_address": { 00:11:16.885 "trtype": "TCP", 00:11:16.885 "adrfam": "IPv4", 00:11:16.885 "traddr": "10.0.0.1", 00:11:16.885 "trsvcid": "47984" 00:11:16.885 }, 00:11:16.885 "auth": { 00:11:16.885 "state": "completed", 00:11:16.885 "digest": "sha256", 00:11:16.885 "dhgroup": "null" 00:11:16.885 } 00:11:16.885 } 00:11:16.885 ]' 00:11:16.885 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:16.885 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:16.885 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:16.885 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:16.885 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.885 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.885 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.885 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:17.145 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:11:17.145 22:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:11:17.714 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.714 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:17.714 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.714 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.714 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.714 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:17.714 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:17.714 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:17.714 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:18.283 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:18.283 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.283 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:18.283 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:18.283 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:18.283 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.283 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.283 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.283 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.283 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.283 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.283 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.283 22:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:18.542 00:11:18.542 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:18.542 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.542 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.802 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.802 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.802 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.802 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.802 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.802 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.802 { 00:11:18.802 "cntlid": 9, 00:11:18.802 "qid": 0, 00:11:18.802 "state": "enabled", 00:11:18.802 "thread": "nvmf_tgt_poll_group_000", 00:11:18.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:18.802 "listen_address": { 00:11:18.802 "trtype": "TCP", 00:11:18.802 "adrfam": "IPv4", 00:11:18.802 "traddr": "10.0.0.3", 00:11:18.802 "trsvcid": "4420" 00:11:18.802 }, 00:11:18.802 "peer_address": { 00:11:18.802 "trtype": "TCP", 00:11:18.802 "adrfam": "IPv4", 00:11:18.802 "traddr": "10.0.0.1", 00:11:18.802 "trsvcid": "48002" 00:11:18.802 }, 00:11:18.802 "auth": { 00:11:18.802 "state": "completed", 00:11:18.802 "digest": "sha256", 00:11:18.802 "dhgroup": "ffdhe2048" 00:11:18.802 } 00:11:18.802 } 00:11:18.802 ]' 00:11:18.802 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:18.802 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:18.802 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:18.802 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:18.802 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:18.802 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.802 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.802 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.062 
22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:11:19.062 22:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:11:19.630 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.630 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:19.630 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.630 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.888 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.888 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:19.888 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:19.888 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:20.147 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:20.147 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.147 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:20.147 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:20.147 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:20.147 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.147 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.147 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.147 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.147 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.147 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.147 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.147 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:20.413 00:11:20.413 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.413 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:20.413 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.676 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.676 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.676 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.676 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.676 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.676 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:20.676 { 00:11:20.676 "cntlid": 11, 00:11:20.676 "qid": 0, 00:11:20.676 "state": "enabled", 00:11:20.676 "thread": "nvmf_tgt_poll_group_000", 00:11:20.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:20.676 "listen_address": { 00:11:20.676 "trtype": "TCP", 00:11:20.676 "adrfam": "IPv4", 00:11:20.676 "traddr": "10.0.0.3", 00:11:20.676 "trsvcid": "4420" 00:11:20.676 }, 00:11:20.676 "peer_address": { 00:11:20.676 "trtype": "TCP", 00:11:20.676 "adrfam": "IPv4", 00:11:20.676 "traddr": "10.0.0.1", 00:11:20.676 "trsvcid": "48018" 00:11:20.676 }, 00:11:20.676 "auth": { 00:11:20.676 "state": "completed", 00:11:20.676 "digest": "sha256", 00:11:20.676 "dhgroup": "ffdhe2048" 00:11:20.676 } 00:11:20.676 } 00:11:20.676 ]' 00:11:20.676 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:20.676 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:20.676 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:20.676 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:20.676 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.676 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.676 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.676 
22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:20.934 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:11:20.934 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:11:21.501 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.502 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:21.502 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.502 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.502 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.502 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:21.502 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:21.502 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:22.067 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:22.067 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.067 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:22.067 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:22.067 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:22.067 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.067 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.067 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.067 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.067 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:22.067 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.067 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.067 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.324 00:11:22.324 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.324 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.324 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.582 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.582 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.582 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.582 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.582 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.582 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.582 { 00:11:22.582 "cntlid": 13, 00:11:22.582 "qid": 0, 00:11:22.582 "state": "enabled", 00:11:22.582 "thread": "nvmf_tgt_poll_group_000", 00:11:22.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:22.582 "listen_address": { 00:11:22.582 "trtype": "TCP", 00:11:22.582 "adrfam": "IPv4", 00:11:22.582 "traddr": "10.0.0.3", 00:11:22.582 "trsvcid": "4420" 00:11:22.582 }, 00:11:22.582 "peer_address": { 00:11:22.582 "trtype": "TCP", 00:11:22.582 "adrfam": "IPv4", 00:11:22.583 "traddr": "10.0.0.1", 00:11:22.583 "trsvcid": "57180" 00:11:22.583 }, 00:11:22.583 "auth": { 00:11:22.583 "state": "completed", 00:11:22.583 "digest": "sha256", 00:11:22.583 "dhgroup": "ffdhe2048" 00:11:22.583 } 00:11:22.583 } 00:11:22.583 ]' 00:11:22.583 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.583 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:22.583 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:22.583 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:22.583 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:22.583 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.583 22:42:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.583 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.841 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:11:22.841 22:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:11:23.416 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.416 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:23.416 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.416 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.416 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.416 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.416 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:23.416 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:23.984 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:23.984 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:23.984 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:23.984 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:23.984 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:23.984 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.984 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3 00:11:23.984 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.984 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:11:23.984 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.984 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:23.984 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:23.984 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:24.242 00:11:24.242 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:24.242 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.242 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:24.501 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.501 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.501 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.501 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.501 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.501 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.501 { 00:11:24.501 "cntlid": 15, 00:11:24.501 "qid": 0, 00:11:24.501 "state": "enabled", 00:11:24.501 "thread": "nvmf_tgt_poll_group_000", 00:11:24.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:24.501 "listen_address": { 00:11:24.501 "trtype": "TCP", 00:11:24.501 "adrfam": "IPv4", 00:11:24.501 "traddr": "10.0.0.3", 00:11:24.501 "trsvcid": "4420" 00:11:24.501 }, 00:11:24.501 "peer_address": { 00:11:24.501 "trtype": "TCP", 00:11:24.501 "adrfam": "IPv4", 00:11:24.501 "traddr": "10.0.0.1", 00:11:24.501 "trsvcid": "57200" 00:11:24.501 }, 00:11:24.501 "auth": { 00:11:24.501 "state": "completed", 00:11:24.501 "digest": "sha256", 00:11:24.501 "dhgroup": "ffdhe2048" 00:11:24.501 } 00:11:24.501 } 00:11:24.501 ]' 00:11:24.501 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.501 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:24.501 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.501 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:24.501 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.501 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.501 
22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.501 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.761 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:11:24.761 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:11:25.329 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.329 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:25.329 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.329 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:25.588 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.156 00:11:26.156 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.156 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.156 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.156 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.156 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.156 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.156 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.156 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.156 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.156 { 00:11:26.156 "cntlid": 17, 00:11:26.156 "qid": 0, 00:11:26.156 "state": "enabled", 00:11:26.156 "thread": "nvmf_tgt_poll_group_000", 00:11:26.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:26.156 "listen_address": { 00:11:26.156 "trtype": "TCP", 00:11:26.156 "adrfam": "IPv4", 00:11:26.156 "traddr": "10.0.0.3", 00:11:26.156 "trsvcid": "4420" 00:11:26.156 }, 00:11:26.156 "peer_address": { 00:11:26.156 "trtype": "TCP", 00:11:26.156 "adrfam": "IPv4", 00:11:26.156 "traddr": "10.0.0.1", 00:11:26.156 "trsvcid": "57246" 00:11:26.156 }, 00:11:26.156 "auth": { 00:11:26.156 "state": "completed", 00:11:26.156 "digest": "sha256", 00:11:26.156 "dhgroup": "ffdhe3072" 00:11:26.156 } 00:11:26.156 } 00:11:26.156 ]' 00:11:26.156 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.415 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:26.415 22:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.415 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:26.415 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:26.415 22:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.415 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.416 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.675 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:11:26.675 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:11:27.242 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.242 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:27.242 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.242 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.242 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.242 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.242 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:27.242 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:27.500 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:27.500 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:27.500 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:27.500 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:27.500 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:27.500 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.500 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:27.500 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.500 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.500 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.500 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.500 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:27.500 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.066 00:11:28.066 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.066 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.066 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.324 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.324 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.324 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.324 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.324 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.324 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:28.324 { 00:11:28.324 "cntlid": 19, 00:11:28.324 "qid": 0, 00:11:28.324 "state": "enabled", 00:11:28.324 "thread": "nvmf_tgt_poll_group_000", 00:11:28.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:28.324 "listen_address": { 00:11:28.324 "trtype": "TCP", 00:11:28.325 "adrfam": "IPv4", 00:11:28.325 "traddr": "10.0.0.3", 00:11:28.325 "trsvcid": "4420" 00:11:28.325 }, 00:11:28.325 "peer_address": { 00:11:28.325 "trtype": "TCP", 00:11:28.325 "adrfam": "IPv4", 00:11:28.325 "traddr": "10.0.0.1", 00:11:28.325 "trsvcid": "57276" 00:11:28.325 }, 00:11:28.325 "auth": { 00:11:28.325 "state": "completed", 00:11:28.325 "digest": "sha256", 00:11:28.325 "dhgroup": "ffdhe3072" 00:11:28.325 } 00:11:28.325 } 00:11:28.325 ]' 00:11:28.325 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.325 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:28.325 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:28.325 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:28.325 22:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:28.325 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.325 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.325 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.583 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:11:28.583 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:11:29.149 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.149 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:29.149 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.149 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.149 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.149 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:29.149 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:29.149 22:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:29.717 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:29.717 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:29.717 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:29.717 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:29.717 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:29.717 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.717 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.717 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.717 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.717 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.717 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.717 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.717 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:29.974 00:11:29.974 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.974 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.974 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:30.231 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.231 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.231 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.231 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.231 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.231 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:30.231 { 00:11:30.231 "cntlid": 21, 00:11:30.231 "qid": 0, 00:11:30.231 "state": "enabled", 00:11:30.231 "thread": "nvmf_tgt_poll_group_000", 00:11:30.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:30.231 "listen_address": { 00:11:30.231 "trtype": "TCP", 00:11:30.231 "adrfam": "IPv4", 00:11:30.231 "traddr": "10.0.0.3", 00:11:30.231 "trsvcid": "4420" 00:11:30.231 }, 00:11:30.231 "peer_address": { 00:11:30.231 "trtype": "TCP", 00:11:30.231 "adrfam": "IPv4", 00:11:30.231 "traddr": "10.0.0.1", 00:11:30.231 "trsvcid": "57296" 00:11:30.231 }, 00:11:30.231 "auth": { 00:11:30.231 "state": "completed", 00:11:30.231 "digest": "sha256", 00:11:30.231 "dhgroup": "ffdhe3072" 00:11:30.231 } 00:11:30.231 } 00:11:30.231 ]' 00:11:30.231 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:30.231 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:30.231 22:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:30.231 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:30.231 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:30.231 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.231 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.231 22:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.497 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:11:30.497 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:11:31.429 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.429 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:31.429 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.429 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.429 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.429 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:31.429 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:31.429 22:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:31.429 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:31.429 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:31.429 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:31.429 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:31.429 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:31.429 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.429 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3 00:11:31.429 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.429 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.429 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.429 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:31.429 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:31.429 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:31.686 00:11:31.686 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.686 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.686 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.250 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.250 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.250 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.250 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.250 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.250 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.250 { 00:11:32.250 "cntlid": 23, 00:11:32.250 "qid": 0, 00:11:32.250 "state": "enabled", 00:11:32.250 "thread": "nvmf_tgt_poll_group_000", 00:11:32.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:32.250 "listen_address": { 00:11:32.250 "trtype": "TCP", 00:11:32.250 "adrfam": "IPv4", 00:11:32.250 "traddr": "10.0.0.3", 00:11:32.250 "trsvcid": "4420" 00:11:32.250 }, 00:11:32.250 "peer_address": { 00:11:32.250 "trtype": "TCP", 00:11:32.250 "adrfam": "IPv4", 00:11:32.250 "traddr": "10.0.0.1", 00:11:32.250 "trsvcid": "43816" 00:11:32.250 }, 00:11:32.250 "auth": { 00:11:32.250 "state": "completed", 00:11:32.250 "digest": "sha256", 00:11:32.250 "dhgroup": "ffdhe3072" 00:11:32.250 } 00:11:32.250 } 00:11:32.250 ]' 00:11:32.250 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:32.250 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:32.250 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:32.250 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:32.250 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:32.250 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.250 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.250 22:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.509 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:11:32.509 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:11:33.075 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.075 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:33.075 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.075 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.075 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.075 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:33.075 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:33.075 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:33.075 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:33.333 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:33.333 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.333 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:33.333 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:33.333 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:33.333 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.333 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.333 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.333 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.592 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.592 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.592 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.592 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:33.852 00:11:33.852 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.852 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.852 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.111 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.111 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.111 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.111 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.111 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.111 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:34.111 { 00:11:34.111 "cntlid": 25, 00:11:34.111 "qid": 0, 00:11:34.111 "state": "enabled", 00:11:34.111 "thread": "nvmf_tgt_poll_group_000", 00:11:34.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:34.111 "listen_address": { 00:11:34.111 "trtype": "TCP", 00:11:34.111 "adrfam": "IPv4", 00:11:34.111 "traddr": "10.0.0.3", 00:11:34.111 "trsvcid": "4420" 00:11:34.111 }, 00:11:34.111 "peer_address": { 00:11:34.111 "trtype": "TCP", 00:11:34.111 "adrfam": "IPv4", 00:11:34.111 "traddr": "10.0.0.1", 00:11:34.111 "trsvcid": "43840" 00:11:34.111 }, 00:11:34.111 "auth": { 00:11:34.111 "state": "completed", 00:11:34.111 "digest": "sha256", 00:11:34.111 "dhgroup": "ffdhe4096" 00:11:34.111 } 00:11:34.111 } 00:11:34.111 ]' 00:11:34.111 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:34.111 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:34.111 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:34.111 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:34.111 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.111 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.111 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.111 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.681 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:11:34.681 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:11:35.249 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.249 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:35.249 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.249 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.249 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.249 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.249 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:35.249 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:35.508 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:35.508 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.508 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:35.508 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:35.508 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:35.508 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.508 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.508 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.508 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.508 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.508 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.508 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:35.508 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.089 00:11:36.089 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.089 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.089 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.363 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.363 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.363 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.363 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.363 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.363 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.363 { 00:11:36.363 "cntlid": 27, 00:11:36.363 "qid": 0, 00:11:36.363 "state": "enabled", 00:11:36.363 "thread": "nvmf_tgt_poll_group_000", 00:11:36.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:36.363 "listen_address": { 00:11:36.363 "trtype": "TCP", 00:11:36.363 "adrfam": "IPv4", 00:11:36.363 "traddr": "10.0.0.3", 00:11:36.363 "trsvcid": "4420" 00:11:36.363 }, 00:11:36.363 "peer_address": { 00:11:36.363 "trtype": "TCP", 00:11:36.363 "adrfam": "IPv4", 00:11:36.363 "traddr": "10.0.0.1", 00:11:36.363 "trsvcid": "43866" 00:11:36.363 }, 00:11:36.363 "auth": { 00:11:36.363 "state": "completed", 
00:11:36.363 "digest": "sha256", 00:11:36.363 "dhgroup": "ffdhe4096" 00:11:36.363 } 00:11:36.363 } 00:11:36.363 ]' 00:11:36.363 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.363 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:36.363 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:36.363 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:36.363 22:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.363 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.363 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.363 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.622 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:11:36.622 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:11:37.191 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.191 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:37.191 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.191 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.191 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.191 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.191 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:37.191 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:37.450 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:37.450 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.450 22:42:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:37.450 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:37.450 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:37.450 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.450 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.450 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.450 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.450 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.450 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.450 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:37.450 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.017 00:11:38.017 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.017 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.017 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.276 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.276 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.276 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.276 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.276 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.276 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.276 { 00:11:38.276 "cntlid": 29, 00:11:38.276 "qid": 0, 00:11:38.276 "state": "enabled", 00:11:38.276 "thread": "nvmf_tgt_poll_group_000", 00:11:38.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:38.276 "listen_address": { 00:11:38.276 "trtype": "TCP", 00:11:38.276 "adrfam": "IPv4", 00:11:38.276 "traddr": "10.0.0.3", 00:11:38.276 "trsvcid": "4420" 00:11:38.276 }, 00:11:38.276 "peer_address": { 00:11:38.276 "trtype": "TCP", 00:11:38.276 "adrfam": 
"IPv4", 00:11:38.276 "traddr": "10.0.0.1", 00:11:38.276 "trsvcid": "43908" 00:11:38.276 }, 00:11:38.276 "auth": { 00:11:38.276 "state": "completed", 00:11:38.276 "digest": "sha256", 00:11:38.276 "dhgroup": "ffdhe4096" 00:11:38.276 } 00:11:38.276 } 00:11:38.276 ]' 00:11:38.276 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:38.276 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:38.276 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.276 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:38.276 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.276 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.276 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.276 22:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:38.535 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:11:38.535 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:11:39.103 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.103 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:39.103 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.103 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.103 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.103 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.103 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:39.103 22:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:39.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:39.362 22:42:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:39.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:39.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:39.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3 00:11:39.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:39.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:39.362 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:39.931 00:11:39.931 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.931 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:39.931 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.190 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.190 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.190 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.190 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.190 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.190 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.190 { 00:11:40.190 "cntlid": 31, 00:11:40.190 "qid": 0, 00:11:40.190 "state": "enabled", 00:11:40.190 "thread": "nvmf_tgt_poll_group_000", 00:11:40.191 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:40.191 "listen_address": { 00:11:40.191 "trtype": "TCP", 00:11:40.191 "adrfam": "IPv4", 00:11:40.191 "traddr": "10.0.0.3", 00:11:40.191 "trsvcid": "4420" 00:11:40.191 }, 00:11:40.191 "peer_address": { 00:11:40.191 "trtype": "TCP", 
00:11:40.191 "adrfam": "IPv4", 00:11:40.191 "traddr": "10.0.0.1", 00:11:40.191 "trsvcid": "43934" 00:11:40.191 }, 00:11:40.191 "auth": { 00:11:40.191 "state": "completed", 00:11:40.191 "digest": "sha256", 00:11:40.191 "dhgroup": "ffdhe4096" 00:11:40.191 } 00:11:40.191 } 00:11:40.191 ]' 00:11:40.191 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.191 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:40.191 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.191 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:40.191 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.191 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.191 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.191 22:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.455 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:11:40.455 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:11:41.023 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.023 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:41.024 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.024 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.024 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.024 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:41.024 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.024 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:41.024 22:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:41.281 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:41.281 
22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.281 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:41.281 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:41.281 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:41.281 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.281 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.281 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.281 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.540 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.540 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.540 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.540 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.799 00:11:41.799 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:41.799 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:41.799 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.058 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.058 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.058 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.058 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.058 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.058 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.058 { 00:11:42.058 "cntlid": 33, 00:11:42.058 "qid": 0, 00:11:42.058 "state": "enabled", 00:11:42.058 "thread": "nvmf_tgt_poll_group_000", 00:11:42.058 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:42.058 "listen_address": { 00:11:42.058 "trtype": "TCP", 00:11:42.058 "adrfam": "IPv4", 00:11:42.058 "traddr": 
"10.0.0.3", 00:11:42.058 "trsvcid": "4420" 00:11:42.058 }, 00:11:42.058 "peer_address": { 00:11:42.058 "trtype": "TCP", 00:11:42.058 "adrfam": "IPv4", 00:11:42.058 "traddr": "10.0.0.1", 00:11:42.058 "trsvcid": "56526" 00:11:42.058 }, 00:11:42.058 "auth": { 00:11:42.058 "state": "completed", 00:11:42.058 "digest": "sha256", 00:11:42.058 "dhgroup": "ffdhe6144" 00:11:42.058 } 00:11:42.058 } 00:11:42.058 ]' 00:11:42.058 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.317 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:42.317 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.317 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:42.317 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.317 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.317 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.317 22:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.576 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:11:42.576 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:11:43.144 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.144 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:43.144 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.144 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.144 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.144 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.144 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:43.144 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:43.403 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:43.403 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.403 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:43.403 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:43.403 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:43.403 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.403 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.403 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.403 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.403 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.403 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.403 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.403 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.972 00:11:43.972 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.972 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:43.972 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.231 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.231 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.231 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.231 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.231 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.231 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.231 { 00:11:44.231 "cntlid": 35, 00:11:44.231 "qid": 0, 00:11:44.231 "state": "enabled", 00:11:44.231 "thread": "nvmf_tgt_poll_group_000", 
00:11:44.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:44.231 "listen_address": { 00:11:44.231 "trtype": "TCP", 00:11:44.231 "adrfam": "IPv4", 00:11:44.231 "traddr": "10.0.0.3", 00:11:44.231 "trsvcid": "4420" 00:11:44.231 }, 00:11:44.231 "peer_address": { 00:11:44.231 "trtype": "TCP", 00:11:44.231 "adrfam": "IPv4", 00:11:44.231 "traddr": "10.0.0.1", 00:11:44.231 "trsvcid": "56542" 00:11:44.231 }, 00:11:44.231 "auth": { 00:11:44.231 "state": "completed", 00:11:44.231 "digest": "sha256", 00:11:44.231 "dhgroup": "ffdhe6144" 00:11:44.231 } 00:11:44.232 } 00:11:44.232 ]' 00:11:44.232 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.232 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:44.232 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.490 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:44.490 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.490 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.490 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.491 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.750 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:11:44.750 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:11:45.318 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.318 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:45.318 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.318 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.318 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.318 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.318 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:45.318 22:43:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:45.883 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:45.883 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.883 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:45.883 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:45.883 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:45.883 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.883 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.883 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.883 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.883 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.883 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.883 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.883 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:46.140 00:11:46.140 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.140 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.140 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.705 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.705 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.705 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.705 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.705 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.705 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.705 { 
00:11:46.705 "cntlid": 37, 00:11:46.705 "qid": 0, 00:11:46.705 "state": "enabled", 00:11:46.705 "thread": "nvmf_tgt_poll_group_000", 00:11:46.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:46.705 "listen_address": { 00:11:46.705 "trtype": "TCP", 00:11:46.705 "adrfam": "IPv4", 00:11:46.705 "traddr": "10.0.0.3", 00:11:46.705 "trsvcid": "4420" 00:11:46.705 }, 00:11:46.705 "peer_address": { 00:11:46.705 "trtype": "TCP", 00:11:46.705 "adrfam": "IPv4", 00:11:46.705 "traddr": "10.0.0.1", 00:11:46.705 "trsvcid": "56572" 00:11:46.705 }, 00:11:46.705 "auth": { 00:11:46.705 "state": "completed", 00:11:46.705 "digest": "sha256", 00:11:46.705 "dhgroup": "ffdhe6144" 00:11:46.705 } 00:11:46.705 } 00:11:46.705 ]' 00:11:46.705 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.705 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:46.705 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.706 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:46.706 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.706 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.706 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.706 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.963 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:11:46.963 22:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:11:47.529 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.529 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:47.529 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.529 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.529 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.529 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.529 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:47.529 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:47.788 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:47.788 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.788 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:47.788 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:47.788 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:47.788 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.788 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3 00:11:47.788 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.788 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.788 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.788 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:47.788 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:47.788 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:48.362 00:11:48.362 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.362 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.362 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.362 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.362 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.362 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.362 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.362 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.362 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:48.362 { 00:11:48.362 "cntlid": 39, 00:11:48.362 "qid": 0, 00:11:48.362 "state": "enabled", 00:11:48.362 "thread": "nvmf_tgt_poll_group_000", 00:11:48.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:48.362 "listen_address": { 00:11:48.362 "trtype": "TCP", 00:11:48.362 "adrfam": "IPv4", 00:11:48.362 "traddr": "10.0.0.3", 00:11:48.362 "trsvcid": "4420" 00:11:48.362 }, 00:11:48.362 "peer_address": { 00:11:48.362 "trtype": "TCP", 00:11:48.362 "adrfam": "IPv4", 00:11:48.362 "traddr": "10.0.0.1", 00:11:48.362 "trsvcid": "56598" 00:11:48.362 }, 00:11:48.362 "auth": { 00:11:48.362 "state": "completed", 00:11:48.362 "digest": "sha256", 00:11:48.362 "dhgroup": "ffdhe6144" 00:11:48.362 } 00:11:48.362 } 00:11:48.362 ]' 00:11:48.362 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.632 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:48.632 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.632 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:48.632 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.632 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.632 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.632 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.919 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:11:48.919 22:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:11:49.854 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.854 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:49.854 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.854 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.854 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.854 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:49.854 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.854 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:49.855 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:49.855 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:49.855 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.855 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:49.855 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:49.855 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:49.855 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.855 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.855 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.855 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.855 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.855 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.855 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.855 22:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.792 00:11:50.792 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.792 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.792 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.792 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.792 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.792 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.792 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.792 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:50.792 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.792 { 00:11:50.792 "cntlid": 41, 00:11:50.792 "qid": 0, 00:11:50.792 "state": "enabled", 00:11:50.792 "thread": "nvmf_tgt_poll_group_000", 00:11:50.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:50.792 "listen_address": { 00:11:50.792 "trtype": "TCP", 00:11:50.792 "adrfam": "IPv4", 00:11:50.792 "traddr": "10.0.0.3", 00:11:50.792 "trsvcid": "4420" 00:11:50.792 }, 00:11:50.792 "peer_address": { 00:11:50.792 "trtype": "TCP", 00:11:50.792 "adrfam": "IPv4", 00:11:50.792 "traddr": "10.0.0.1", 00:11:50.792 "trsvcid": "56638" 00:11:50.792 }, 00:11:50.792 "auth": { 00:11:50.792 "state": "completed", 00:11:50.792 "digest": "sha256", 00:11:50.792 "dhgroup": "ffdhe8192" 00:11:50.792 } 00:11:50.792 } 00:11:50.792 ]' 00:11:50.792 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.051 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:51.051 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.051 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:51.051 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:51.051 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:51.051 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:51.051 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:51.310 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:11:51.310 22:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:11:51.878 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.878 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:51.878 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.878 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.878 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
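Every iteration in this trace follows the same shape. A minimal sketch of one pass, reconstructed from the commands echoed above — the hostrpc and rpc_cmd helpers, the addresses, and the key names are taken from the trace, but this is an illustration, not the test script itself:

  # hostrpc drives the host-side rpc.py on /var/tmp/host.sock, matching the
  # expansion echoed at target/auth.sh@31; rpc_cmd drives the target side.
  hostrpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3
  digest=sha256 dhgroup=ffdhe8192 keyid=0

  # Pin the host to a single digest/dhgroup pair for this pass.
  hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Authorize the host on the subsystem with the keypair under test, attach,
  # then tear everything down again before the next combination.
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  hostrpc bdev_nvme_detach_controller nvme0
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
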
00:11:51.878 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.878 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:51.878 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:52.445 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:11:52.445 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.445 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:52.445 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:52.445 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:52.445 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.445 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.445 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.445 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.445 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.445 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.445 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.445 22:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.013 00:11:53.013 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:53.013 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.013 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.272 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.272 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.272 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.272 22:43:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.272 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.272 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:53.272 { 00:11:53.272 "cntlid": 43, 00:11:53.272 "qid": 0, 00:11:53.272 "state": "enabled", 00:11:53.272 "thread": "nvmf_tgt_poll_group_000", 00:11:53.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:53.272 "listen_address": { 00:11:53.272 "trtype": "TCP", 00:11:53.272 "adrfam": "IPv4", 00:11:53.272 "traddr": "10.0.0.3", 00:11:53.272 "trsvcid": "4420" 00:11:53.272 }, 00:11:53.272 "peer_address": { 00:11:53.272 "trtype": "TCP", 00:11:53.272 "adrfam": "IPv4", 00:11:53.272 "traddr": "10.0.0.1", 00:11:53.272 "trsvcid": "39132" 00:11:53.272 }, 00:11:53.272 "auth": { 00:11:53.272 "state": "completed", 00:11:53.272 "digest": "sha256", 00:11:53.272 "dhgroup": "ffdhe8192" 00:11:53.272 } 00:11:53.272 } 00:11:53.272 ]' 00:11:53.272 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:53.272 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:53.272 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:53.272 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:53.272 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:53.272 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:53.272 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:53.273 22:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.840 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:11:53.840 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:11:54.099 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.357 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:54.357 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.357 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
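Between attach and detach, each pass reads the qpair back from the target and checks that the negotiated authentication fields match what was configured — this is the jq/[[ ... ]] sequence repeated throughout the trace. A sketch of that check (values shown for the sha256/ffdhe8192 pass):

  # Read the active qpairs back from the target-side RPC socket.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  # The attached controller must be the one we created, and the negotiated
  # digest/dhgroup must match the pass under test, with auth completed.
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
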
00:11:54.357 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.357 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.357 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:54.357 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:54.617 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:11:54.617 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.617 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:54.617 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:54.617 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:54.617 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.617 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.617 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.617 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.617 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.617 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.617 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.617 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.185 00:11:55.185 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.185 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.185 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.444 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.444 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.444 22:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.444 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.444 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.444 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.444 { 00:11:55.444 "cntlid": 45, 00:11:55.444 "qid": 0, 00:11:55.444 "state": "enabled", 00:11:55.444 "thread": "nvmf_tgt_poll_group_000", 00:11:55.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:55.444 "listen_address": { 00:11:55.444 "trtype": "TCP", 00:11:55.444 "adrfam": "IPv4", 00:11:55.444 "traddr": "10.0.0.3", 00:11:55.444 "trsvcid": "4420" 00:11:55.444 }, 00:11:55.444 "peer_address": { 00:11:55.444 "trtype": "TCP", 00:11:55.444 "adrfam": "IPv4", 00:11:55.444 "traddr": "10.0.0.1", 00:11:55.444 "trsvcid": "39150" 00:11:55.444 }, 00:11:55.444 "auth": { 00:11:55.444 "state": "completed", 00:11:55.444 "digest": "sha256", 00:11:55.444 "dhgroup": "ffdhe8192" 00:11:55.444 } 00:11:55.444 } 00:11:55.444 ]' 00:11:55.444 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.444 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:55.444 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.444 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:55.444 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.703 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.703 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.703 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.963 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:11:55.963 22:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:11:56.530 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.530 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:56.530 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:56.530 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.530 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.530 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.530 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:56.530 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:56.789 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:11:56.789 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.789 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:56.789 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:56.789 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:56.789 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.789 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3 00:11:56.789 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.789 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.789 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.790 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:56.790 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:56.790 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:57.357 00:11:57.357 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.357 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.357 22:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.617 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.617 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.617 
22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.617 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.617 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.617 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.617 { 00:11:57.617 "cntlid": 47, 00:11:57.617 "qid": 0, 00:11:57.617 "state": "enabled", 00:11:57.617 "thread": "nvmf_tgt_poll_group_000", 00:11:57.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:57.617 "listen_address": { 00:11:57.617 "trtype": "TCP", 00:11:57.617 "adrfam": "IPv4", 00:11:57.617 "traddr": "10.0.0.3", 00:11:57.617 "trsvcid": "4420" 00:11:57.617 }, 00:11:57.617 "peer_address": { 00:11:57.617 "trtype": "TCP", 00:11:57.617 "adrfam": "IPv4", 00:11:57.617 "traddr": "10.0.0.1", 00:11:57.617 "trsvcid": "39186" 00:11:57.617 }, 00:11:57.617 "auth": { 00:11:57.617 "state": "completed", 00:11:57.617 "digest": "sha256", 00:11:57.617 "dhgroup": "ffdhe8192" 00:11:57.617 } 00:11:57.617 } 00:11:57.617 ]' 00:11:57.617 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.617 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:57.617 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.617 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:57.617 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.617 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.617 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.617 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.876 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:11:57.876 22:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:11:58.811 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.811 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:11:58.811 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.811 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
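Each pass also exercises the kernel initiator through nvme-cli before deauthorizing the host, as the nvme connect/disconnect lines above show. A sketch of that leg — the DHHC-1 strings here are placeholders, not the secrets from this run:

  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 \
      --dhchap-secret 'DHHC-1:00:<host key>' \
      --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'

  # Disconnect the kernel host and revoke its authorization on the subsystem.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
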
00:11:58.811 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.811 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:58.811 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:58.811 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.811 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:58.811 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:11:59.070 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:11:59.070 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.070 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:59.070 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:59.070 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:59.070 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.070 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.070 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.070 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.070 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.070 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.070 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.070 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.329 00:11:59.329 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.329 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.329 22:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.587 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.587 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.587 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.587 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.587 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.587 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.587 { 00:11:59.587 "cntlid": 49, 00:11:59.587 "qid": 0, 00:11:59.587 "state": "enabled", 00:11:59.587 "thread": "nvmf_tgt_poll_group_000", 00:11:59.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:11:59.587 "listen_address": { 00:11:59.587 "trtype": "TCP", 00:11:59.587 "adrfam": "IPv4", 00:11:59.587 "traddr": "10.0.0.3", 00:11:59.587 "trsvcid": "4420" 00:11:59.587 }, 00:11:59.587 "peer_address": { 00:11:59.587 "trtype": "TCP", 00:11:59.587 "adrfam": "IPv4", 00:11:59.587 "traddr": "10.0.0.1", 00:11:59.587 "trsvcid": "39222" 00:11:59.587 }, 00:11:59.587 "auth": { 00:11:59.587 "state": "completed", 00:11:59.587 "digest": "sha384", 00:11:59.587 "dhgroup": "null" 00:11:59.587 } 00:11:59.587 } 00:11:59.587 ]' 00:11:59.587 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.587 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:59.587 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.587 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:59.587 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.845 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.845 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.845 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.105 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:12:00.105 22:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:12:00.673 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:00.673 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:00.673 22:43:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:00.673 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.673 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.673 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.673 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:00.673 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:00.673 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:00.930 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:12:00.930 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:00.930 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:00.930 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:00.930 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:00.930 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:00.930 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.930 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.930 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.930 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.930 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.930 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:00.930 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.497 00:12:01.497 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.497 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.497 22:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:01.755 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:01.755 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:01.755 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.755 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.755 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.755 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:01.755 { 00:12:01.755 "cntlid": 51, 00:12:01.755 "qid": 0, 00:12:01.755 "state": "enabled", 00:12:01.755 "thread": "nvmf_tgt_poll_group_000", 00:12:01.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:01.755 "listen_address": { 00:12:01.755 "trtype": "TCP", 00:12:01.755 "adrfam": "IPv4", 00:12:01.755 "traddr": "10.0.0.3", 00:12:01.755 "trsvcid": "4420" 00:12:01.755 }, 00:12:01.755 "peer_address": { 00:12:01.755 "trtype": "TCP", 00:12:01.755 "adrfam": "IPv4", 00:12:01.755 "traddr": "10.0.0.1", 00:12:01.755 "trsvcid": "45928" 00:12:01.755 }, 00:12:01.755 "auth": { 00:12:01.755 "state": "completed", 00:12:01.755 "digest": "sha384", 00:12:01.755 "dhgroup": "null" 00:12:01.755 } 00:12:01.755 } 00:12:01.755 ]' 00:12:01.755 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:01.755 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:01.756 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:01.756 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:01.756 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:01.756 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:01.756 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:01.756 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.014 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:12:02.014 22:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:02.979 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:02.979 22:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.547 00:12:03.547 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:03.547 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:12:03.547 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:03.807 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:03.807 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:03.807 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.807 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.807 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.807 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:03.807 { 00:12:03.807 "cntlid": 53, 00:12:03.807 "qid": 0, 00:12:03.807 "state": "enabled", 00:12:03.807 "thread": "nvmf_tgt_poll_group_000", 00:12:03.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:03.807 "listen_address": { 00:12:03.807 "trtype": "TCP", 00:12:03.807 "adrfam": "IPv4", 00:12:03.807 "traddr": "10.0.0.3", 00:12:03.807 "trsvcid": "4420" 00:12:03.807 }, 00:12:03.807 "peer_address": { 00:12:03.807 "trtype": "TCP", 00:12:03.807 "adrfam": "IPv4", 00:12:03.807 "traddr": "10.0.0.1", 00:12:03.807 "trsvcid": "45964" 00:12:03.807 }, 00:12:03.807 "auth": { 00:12:03.807 "state": "completed", 00:12:03.807 "digest": "sha384", 00:12:03.807 "dhgroup": "null" 00:12:03.807 } 00:12:03.807 } 00:12:03.807 ]' 00:12:03.807 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:03.807 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:03.807 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:03.807 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:03.807 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:03.807 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:03.807 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:03.807 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.067 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:12:04.067 22:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:12:05.004 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.004 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:05.004 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.004 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.004 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.004 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.005 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:05.005 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:05.005 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:05.005 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:05.005 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:05.005 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:05.005 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:05.005 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:05.005 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3 00:12:05.005 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.005 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.005 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.005 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:05.005 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:05.005 22:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:05.572 00:12:05.572 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:05.572 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.572 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:05.572 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.572 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.572 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.572 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.572 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.572 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.572 { 00:12:05.572 "cntlid": 55, 00:12:05.572 "qid": 0, 00:12:05.572 "state": "enabled", 00:12:05.572 "thread": "nvmf_tgt_poll_group_000", 00:12:05.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:05.572 "listen_address": { 00:12:05.572 "trtype": "TCP", 00:12:05.572 "adrfam": "IPv4", 00:12:05.572 "traddr": "10.0.0.3", 00:12:05.572 "trsvcid": "4420" 00:12:05.572 }, 00:12:05.572 "peer_address": { 00:12:05.572 "trtype": "TCP", 00:12:05.572 "adrfam": "IPv4", 00:12:05.572 "traddr": "10.0.0.1", 00:12:05.572 "trsvcid": "46000" 00:12:05.572 }, 00:12:05.572 "auth": { 00:12:05.572 "state": "completed", 00:12:05.572 "digest": "sha384", 00:12:05.572 "dhgroup": "null" 00:12:05.572 } 00:12:05.572 } 00:12:05.572 ]' 00:12:05.572 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.831 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:05.831 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.831 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:05.831 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.831 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.831 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.831 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:06.090 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:12:06.090 22:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:12:06.654 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:12:06.911 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:06.911 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.911 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.911 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.912 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:06.912 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.912 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:06.912 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:07.170 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:07.170 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:07.170 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:07.170 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:07.170 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:07.170 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:07.170 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.170 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.170 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.170 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.170 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.170 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.170 22:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:07.429 00:12:07.429 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
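The round that just completed is the fixed pattern this trace repeats for every digest/dhgroup/key combination: configure the host stack, register the key on the subsystem, attach, assert the negotiated auth parameters, detach, then replay the same round through the kernel initiator. A condensed sketch of one round, reconstructed from the commands visible in the trace (paths, addresses, and NQNs are the ones the log uses; the DHHC-1 key material below is a placeholder, not the log's secrets, and the target-side RPC socket is assumed to be the default since the trace's rpc_cmd does not show it):

#!/usr/bin/env bash
set -e
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostrpc="$rpc -s /var/tmp/host.sock"   # host-side SPDK app, as hostrpc() uses in the trace
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3

# Restrict the SPDK initiator to the digest/dhgroup pair under test.
$hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Register the host key on the target. --dhchap-ctrlr-key enables bidirectional
# authentication; the key3 rounds in this trace omit it, so those rounds only
# exercise unidirectional auth.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach through the SPDK host stack and assert what was negotiated.
$hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
[[ $($hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]
$hostrpc bdev_nvme_detach_controller nvme0

# Same round through the kernel initiator. The DHHC-1:<nn>:<base64>: strings are
# the TP 8006 secret encoding; <nn> identifies the key-transformation hash
# (00 = none, 01/02/03 = SHA-256/384/512).
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 \
    --dhchap-secret 'DHHC-1:00:PLACEHOLDER:' --dhchap-ctrl-secret 'DHHC-1:03:PLACEHOLDER:'
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Each [[ ... ]] assertion above corresponds to one of the trace's jq checks on the qpair's auth block; a mismatch at any of them is what would fail the test for that digest/dhgroup/key combination.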
00:12:07.430 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.430 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.688 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.688 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.688 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.688 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.688 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.688 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.688 { 00:12:07.688 "cntlid": 57, 00:12:07.688 "qid": 0, 00:12:07.688 "state": "enabled", 00:12:07.688 "thread": "nvmf_tgt_poll_group_000", 00:12:07.688 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:07.688 "listen_address": { 00:12:07.688 "trtype": "TCP", 00:12:07.688 "adrfam": "IPv4", 00:12:07.688 "traddr": "10.0.0.3", 00:12:07.688 "trsvcid": "4420" 00:12:07.688 }, 00:12:07.688 "peer_address": { 00:12:07.688 "trtype": "TCP", 00:12:07.688 "adrfam": "IPv4", 00:12:07.688 "traddr": "10.0.0.1", 00:12:07.688 "trsvcid": "46028" 00:12:07.688 }, 00:12:07.688 "auth": { 00:12:07.688 "state": "completed", 00:12:07.688 "digest": "sha384", 00:12:07.688 "dhgroup": "ffdhe2048" 00:12:07.688 } 00:12:07.688 } 00:12:07.688 ]' 00:12:07.688 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.688 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:07.688 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.947 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:07.947 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:07.947 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.947 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.947 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:08.205 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:12:08.205 22:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: 
--dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:12:08.772 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.772 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:08.772 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.772 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.772 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.772 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:08.772 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:08.772 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:09.032 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:09.032 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:09.032 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:09.032 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:09.032 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:09.032 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:09.032 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.032 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.032 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.032 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.032 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.032 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.032 22:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:09.290 00:12:09.548 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.548 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.549 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:09.808 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.808 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.808 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.808 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.808 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.808 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:09.808 { 00:12:09.808 "cntlid": 59, 00:12:09.808 "qid": 0, 00:12:09.808 "state": "enabled", 00:12:09.808 "thread": "nvmf_tgt_poll_group_000", 00:12:09.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:09.808 "listen_address": { 00:12:09.808 "trtype": "TCP", 00:12:09.808 "adrfam": "IPv4", 00:12:09.808 "traddr": "10.0.0.3", 00:12:09.808 "trsvcid": "4420" 00:12:09.808 }, 00:12:09.808 "peer_address": { 00:12:09.808 "trtype": "TCP", 00:12:09.808 "adrfam": "IPv4", 00:12:09.808 "traddr": "10.0.0.1", 00:12:09.808 "trsvcid": "46060" 00:12:09.808 }, 00:12:09.808 "auth": { 00:12:09.808 "state": "completed", 00:12:09.808 "digest": "sha384", 00:12:09.808 "dhgroup": "ffdhe2048" 00:12:09.808 } 00:12:09.808 } 00:12:09.808 ]' 00:12:09.808 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:09.808 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:09.808 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:09.808 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:09.808 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:09.808 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.808 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.808 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:10.068 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:12:10.068 22:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:12:11.002 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:11.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:11.002 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:11.002 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.002 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.002 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.002 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:11.002 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:11.002 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:11.002 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:11.002 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:11.002 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:11.002 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:11.002 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:11.002 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:11.002 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.002 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.003 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.261 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.261 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.261 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.261 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:11.520 00:12:11.520 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.520 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.520 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.778 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.778 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.778 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.778 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.778 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.778 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:11.778 { 00:12:11.778 "cntlid": 61, 00:12:11.778 "qid": 0, 00:12:11.778 "state": "enabled", 00:12:11.778 "thread": "nvmf_tgt_poll_group_000", 00:12:11.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:11.778 "listen_address": { 00:12:11.778 "trtype": "TCP", 00:12:11.778 "adrfam": "IPv4", 00:12:11.778 "traddr": "10.0.0.3", 00:12:11.778 "trsvcid": "4420" 00:12:11.778 }, 00:12:11.778 "peer_address": { 00:12:11.778 "trtype": "TCP", 00:12:11.778 "adrfam": "IPv4", 00:12:11.778 "traddr": "10.0.0.1", 00:12:11.778 "trsvcid": "41520" 00:12:11.778 }, 00:12:11.778 "auth": { 00:12:11.778 "state": "completed", 00:12:11.778 "digest": "sha384", 00:12:11.778 "dhgroup": "ffdhe2048" 00:12:11.778 } 00:12:11.778 } 00:12:11.778 ]' 00:12:11.778 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:11.778 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:11.778 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:11.778 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:11.778 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:12.037 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:12.037 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:12.037 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:12.295 22:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:12:12.295 22:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:12:12.863 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.863 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:12.863 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.863 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.863 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.863 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:12.863 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:12.863 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:13.123 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:13.123 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:13.123 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:13.123 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:13.123 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:13.123 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:13.123 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3 00:12:13.123 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.123 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.123 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.123 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:13.123 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:13.123 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:13.691 00:12:13.691 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:13.691 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.691 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:13.951 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.951 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.951 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.951 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.951 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.951 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:13.951 { 00:12:13.951 "cntlid": 63, 00:12:13.951 "qid": 0, 00:12:13.951 "state": "enabled", 00:12:13.951 "thread": "nvmf_tgt_poll_group_000", 00:12:13.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:13.951 "listen_address": { 00:12:13.951 "trtype": "TCP", 00:12:13.951 "adrfam": "IPv4", 00:12:13.951 "traddr": "10.0.0.3", 00:12:13.951 "trsvcid": "4420" 00:12:13.951 }, 00:12:13.951 "peer_address": { 00:12:13.951 "trtype": "TCP", 00:12:13.951 "adrfam": "IPv4", 00:12:13.951 "traddr": "10.0.0.1", 00:12:13.951 "trsvcid": "41538" 00:12:13.951 }, 00:12:13.951 "auth": { 00:12:13.951 "state": "completed", 00:12:13.951 "digest": "sha384", 00:12:13.951 "dhgroup": "ffdhe2048" 00:12:13.951 } 00:12:13.951 } 00:12:13.951 ]' 00:12:13.951 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:13.951 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:13.951 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:13.951 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:13.951 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:13.951 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.951 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.951 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:14.251 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:12:14.251 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:12:14.819 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.819 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:14.819 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.819 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.819 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.819 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:14.819 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:14.819 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:14.819 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:15.079 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:15.079 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:15.079 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:15.079 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:15.079 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:15.079 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:15.079 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.079 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.079 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.079 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.079 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.079 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:15.079 22:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:15.648 00:12:15.648 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:15.648 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:15.648 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.907 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.908 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.908 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.908 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.908 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.908 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:15.908 { 00:12:15.908 "cntlid": 65, 00:12:15.908 "qid": 0, 00:12:15.908 "state": "enabled", 00:12:15.908 "thread": "nvmf_tgt_poll_group_000", 00:12:15.908 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:15.908 "listen_address": { 00:12:15.908 "trtype": "TCP", 00:12:15.908 "adrfam": "IPv4", 00:12:15.908 "traddr": "10.0.0.3", 00:12:15.908 "trsvcid": "4420" 00:12:15.908 }, 00:12:15.908 "peer_address": { 00:12:15.908 "trtype": "TCP", 00:12:15.908 "adrfam": "IPv4", 00:12:15.908 "traddr": "10.0.0.1", 00:12:15.908 "trsvcid": "41568" 00:12:15.908 }, 00:12:15.908 "auth": { 00:12:15.908 "state": "completed", 00:12:15.908 "digest": "sha384", 00:12:15.908 "dhgroup": "ffdhe3072" 00:12:15.908 } 00:12:15.908 } 00:12:15.908 ]' 00:12:15.908 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:15.908 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:15.908 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:15.908 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:15.908 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:15.908 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.908 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.908 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:16.166 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:12:16.166 22:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:17.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.103 22:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.103 22:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:17.670 00:12:17.670 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.670 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:17.670 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.929 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.929 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.929 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.929 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.929 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.929 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.929 { 00:12:17.929 "cntlid": 67, 00:12:17.929 "qid": 0, 00:12:17.929 "state": "enabled", 00:12:17.929 "thread": "nvmf_tgt_poll_group_000", 00:12:17.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:17.929 "listen_address": { 00:12:17.929 "trtype": "TCP", 00:12:17.929 "adrfam": "IPv4", 00:12:17.929 "traddr": "10.0.0.3", 00:12:17.929 "trsvcid": "4420" 00:12:17.929 }, 00:12:17.929 "peer_address": { 00:12:17.929 "trtype": "TCP", 00:12:17.929 "adrfam": "IPv4", 00:12:17.929 "traddr": "10.0.0.1", 00:12:17.929 "trsvcid": "41592" 00:12:17.929 }, 00:12:17.929 "auth": { 00:12:17.929 "state": "completed", 00:12:17.929 "digest": "sha384", 00:12:17.929 "dhgroup": "ffdhe3072" 00:12:17.929 } 00:12:17.929 } 00:12:17.929 ]' 00:12:17.929 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.929 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:17.929 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.929 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:17.929 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.929 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.929 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.929 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.188 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:12:18.188 22:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:19.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.124 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:19.691 00:12:19.691 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.691 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.691 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.950 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.950 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.950 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.950 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.950 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.950 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.950 { 00:12:19.950 "cntlid": 69, 00:12:19.950 "qid": 0, 00:12:19.950 "state": "enabled", 00:12:19.950 "thread": "nvmf_tgt_poll_group_000", 00:12:19.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:19.950 "listen_address": { 00:12:19.950 "trtype": "TCP", 00:12:19.950 "adrfam": "IPv4", 00:12:19.950 "traddr": "10.0.0.3", 00:12:19.950 "trsvcid": "4420" 00:12:19.950 }, 00:12:19.950 "peer_address": { 00:12:19.950 "trtype": "TCP", 00:12:19.950 "adrfam": "IPv4", 00:12:19.950 "traddr": "10.0.0.1", 00:12:19.950 "trsvcid": "41620" 00:12:19.950 }, 00:12:19.950 "auth": { 00:12:19.950 "state": "completed", 00:12:19.950 "digest": "sha384", 00:12:19.950 "dhgroup": "ffdhe3072" 00:12:19.950 } 00:12:19.950 } 00:12:19.950 ]' 00:12:19.950 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:19.950 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:19.950 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.950 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:19.950 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:20.209 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:20.209 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0
00:12:20.209 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:20.468 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh:
00:12:20.468 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh:
00:12:21.034 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:21.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:21.034 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3
00:12:21.034 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:21.034 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:21.034 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:21.034 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:21.034 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:12:21.035 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:12:21.293 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:12:21.293 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:21.293 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:21.293 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:12:21.293 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:12:21.293 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:21.293 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3
00:12:21.293 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:21.293 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:21.293 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:21.293 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:12:21.293 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:12:21.293 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:12:21.552
00:12:21.552 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:21.552 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:21.552 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:21.811 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:21.811 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:21.811 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:21.811 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:21.811 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:21.811 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:21.811 {
00:12:21.811 "cntlid": 71,
00:12:21.811 "qid": 0,
00:12:21.811 "state": "enabled",
00:12:21.811 "thread": "nvmf_tgt_poll_group_000",
00:12:21.811 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3",
00:12:21.811 "listen_address": {
00:12:21.811 "trtype": "TCP",
00:12:21.811 "adrfam": "IPv4",
00:12:21.811 "traddr": "10.0.0.3",
00:12:21.811 "trsvcid": "4420"
00:12:21.811 },
00:12:21.811 "peer_address": {
00:12:21.811 "trtype": "TCP",
00:12:21.811 "adrfam": "IPv4",
00:12:21.811 "traddr": "10.0.0.1",
00:12:21.811 "trsvcid": "51820"
00:12:21.811 },
00:12:21.811 "auth": {
00:12:21.811 "state": "completed",
00:12:21.811 "digest": "sha384",
00:12:21.811 "dhgroup": "ffdhe3072"
00:12:21.811 }
00:12:21.811 }
00:12:21.811 ]'
00:12:21.811 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:21.811 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:12:21.811 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:22.070 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:12:22.070 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:22.070 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:22.070 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:22.070 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:22.329 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=:
00:12:22.329 22:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=:
00:12:22.897 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:22.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:22.897 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3
00:12:22.897 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:22.897 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:22.897 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:22.897 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:12:22.897 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:22.897 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:12:22.897 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:12:23.156 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0
00:12:23.156 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:23.156 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:23.156 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:12:23.156 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:12:23.156 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:23.156 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:23.156 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:23.156 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:23.156 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:23.156 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:23.156 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:23.156 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:23.415
00:12:23.415 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:23.415 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:23.415 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:23.673 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:23.673 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:23.673 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:23.673 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:23.673 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:23.673 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:23.673 {
00:12:23.674 "cntlid": 73,
00:12:23.674 "qid": 0,
00:12:23.674 "state": "enabled",
00:12:23.674 "thread": "nvmf_tgt_poll_group_000",
00:12:23.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3",
00:12:23.674 "listen_address": {
00:12:23.674 "trtype": "TCP",
00:12:23.674 "adrfam": "IPv4",
00:12:23.674 "traddr": "10.0.0.3",
00:12:23.674 "trsvcid": "4420"
00:12:23.674 },
00:12:23.674 "peer_address": {
00:12:23.674 "trtype": "TCP",
00:12:23.674 "adrfam": "IPv4",
00:12:23.674 "traddr": "10.0.0.1",
00:12:23.674 "trsvcid": "51848"
00:12:23.674 },
00:12:23.674 "auth": {
00:12:23.674 "state": "completed",
00:12:23.674 "digest": "sha384",
00:12:23.674 "dhgroup": "ffdhe4096"
00:12:23.674 }
00:12:23.674 }
00:12:23.674 ]'
00:12:23.674 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:23.674 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:12:23.933 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:23.933 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:12:23.933 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:23.933 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 --
# [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:23.933 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:23.933 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:24.191 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=:
00:12:24.191 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=:
00:12:24.758 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:24.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:24.758 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3
00:12:24.758 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:24.758 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:24.758 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:24.758 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:24.758 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:12:24.758 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:12:25.017 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:12:25.017 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:25.017 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:25.017 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:12:25.017 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:12:25.017 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:25.017 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:25.017 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.017 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:25.017 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.017 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:25.017 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:25.017 22:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:25.275
00:12:25.533 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:25.533 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:25.533 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:25.791 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:25.791 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:25.791 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:25.791 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:25.791 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:25.791 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:25.791 {
00:12:25.791 "cntlid": 75,
00:12:25.791 "qid": 0,
00:12:25.791 "state": "enabled",
00:12:25.791 "thread": "nvmf_tgt_poll_group_000",
00:12:25.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3",
00:12:25.791 "listen_address": {
00:12:25.791 "trtype": "TCP",
00:12:25.791 "adrfam": "IPv4",
00:12:25.791 "traddr": "10.0.0.3",
00:12:25.791 "trsvcid": "4420"
00:12:25.791 },
00:12:25.791 "peer_address": {
00:12:25.791 "trtype": "TCP",
00:12:25.791 "adrfam": "IPv4",
00:12:25.791 "traddr": "10.0.0.1",
00:12:25.791 "trsvcid": "51874"
00:12:25.791 },
00:12:25.791 "auth": {
00:12:25.791 "state": "completed",
00:12:25.791 "digest": "sha384",
00:12:25.791 "dhgroup": "ffdhe4096"
00:12:25.791 }
00:12:25.791 }
00:12:25.791 ]'
00:12:25.791 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:25.791 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:12:25.791 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:25.791 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:12:25.791 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:25.791 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:25.791 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:25.791 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:26.049 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==:
00:12:26.049 22:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==:
00:12:26.636 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:26.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:26.636 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3
00:12:26.636 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:26.636 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:26.636 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:26.636 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:26.636 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:12:26.636 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:12:27.205 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
00:12:27.205 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:27.205 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:27.205 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:12:27.205 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:12:27.205 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:27.205 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:27.205 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.205 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:27.205 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.205 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:27.205 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:27.205 22:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:27.464
00:12:27.464 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:27.464 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:27.464 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:27.724 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:27.724 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:27.724 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:27.724 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:27.724 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:27.724 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:27.724 {
00:12:27.724 "cntlid": 77,
00:12:27.724 "qid": 0,
00:12:27.724 "state": "enabled",
00:12:27.724 "thread": "nvmf_tgt_poll_group_000",
00:12:27.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3",
00:12:27.724 "listen_address": {
00:12:27.724 "trtype": "TCP",
00:12:27.724 "adrfam": "IPv4",
00:12:27.724 "traddr": "10.0.0.3",
00:12:27.724 "trsvcid": "4420"
00:12:27.724 },
00:12:27.724 "peer_address": {
00:12:27.724 "trtype": "TCP",
00:12:27.724 "adrfam": "IPv4",
00:12:27.724 "traddr": "10.0.0.1",
00:12:27.724 "trsvcid": "51906"
00:12:27.724 },
00:12:27.724 "auth": {
00:12:27.724 "state": "completed",
00:12:27.724 "digest": "sha384",
00:12:27.724 "dhgroup": "ffdhe4096"
00:12:27.724 }
00:12:27.724 }
00:12:27.724 ]'
00:12:27.724 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:27.724 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:12:27.724 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 --
# jq -r '.[0].auth.dhgroup'
00:12:27.724 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:12:27.724 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:27.983 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:27.983 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:27.983 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:28.243 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh:
00:12:28.243 22:43:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh:
00:12:28.811 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:28.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:28.811 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3
00:12:28.811 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:28.811 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:28.811 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:28.811 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:28.811 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:12:28.811 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:12:29.071 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:12:29.071 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:29.071 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:29.071 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:12:29.071 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:12:29.071 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:29.071 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3
00:12:29.071 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.071 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:29.071 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.071 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:12:29.071 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:12:29.071 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:12:29.640
00:12:29.640 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:29.640 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:29.640 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:29.899 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:29.899 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:29.899 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:29.899 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:29.899 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:29.899 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:29.899 {
00:12:29.899 "cntlid": 79,
00:12:29.899 "qid": 0,
00:12:29.899 "state": "enabled",
00:12:29.899 "thread": "nvmf_tgt_poll_group_000",
00:12:29.899 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3",
00:12:29.899 "listen_address": {
00:12:29.899 "trtype": "TCP",
00:12:29.899 "adrfam": "IPv4",
00:12:29.899 "traddr": "10.0.0.3",
00:12:29.899 "trsvcid": "4420"
00:12:29.899 },
00:12:29.899 "peer_address": {
00:12:29.899 "trtype": "TCP",
00:12:29.899 "adrfam": "IPv4",
00:12:29.899 "traddr": "10.0.0.1",
00:12:29.899 "trsvcid": "51924"
00:12:29.899 },
00:12:29.899 "auth": {
00:12:29.899 "state": "completed",
00:12:29.899 "digest": "sha384",
00:12:29.899 "dhgroup": "ffdhe4096"
00:12:29.899 }
00:12:29.899 }
00:12:29.900 ]'
00:12:29.900 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:29.900 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:12:29.900 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:29.900 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:12:29.900 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:29.900 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:29.900 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:29.900 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:30.468 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=:
00:12:30.468 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=:
00:12:31.034 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:31.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:31.034 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3
00:12:31.034 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.034 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:31.034 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.034 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:12:31.034 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:31.034 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:12:31.034 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:12:31.293 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:12:31.293 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:31.293 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:31.293 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:12:31.293 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:12:31.293 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:31.293 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:31.293 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:31.293 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:31.293 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:31.293 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:31.293 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:31.293 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:31.551
00:12:31.552 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:31.552 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:31.552 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:32.117 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:32.117 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:32.117 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:32.117 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:32.117 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:32.117 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:32.117 {
00:12:32.117 "cntlid": 81,
00:12:32.117 "qid": 0,
00:12:32.117 "state": "enabled",
00:12:32.117 "thread": "nvmf_tgt_poll_group_000",
00:12:32.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3",
00:12:32.117 "listen_address": {
00:12:32.117 "trtype": "TCP",
00:12:32.117 "adrfam": "IPv4",
00:12:32.117 "traddr": "10.0.0.3",
00:12:32.117 "trsvcid": "4420"
00:12:32.118 },
00:12:32.118 "peer_address": {
00:12:32.118 "trtype": "TCP",
00:12:32.118 "adrfam": "IPv4",
00:12:32.118 "traddr": "10.0.0.1",
00:12:32.118 "trsvcid": "60106"
00:12:32.118 },
00:12:32.118 "auth": {
00:12:32.118 "state": "completed",
00:12:32.118 "digest": "sha384",
00:12:32.118 "dhgroup": "ffdhe6144"
00:12:32.118 }
00:12:32.118 }
00:12:32.118 ]'
00:12:32.118 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:32.118 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:12:32.118 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:32.118 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:12:32.118 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:32.118 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:32.118 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:32.118 22:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:32.461 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=:
00:12:32.461 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=:
00:12:33.047 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:33.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:33.047 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3
00:12:33.047 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.047 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:33.047 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.047 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:33.047 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:12:33.047 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:12:33.321 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:12:33.321 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:33.321 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:33.321 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:12:33.321 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:12:33.321 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:33.321 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:33.321 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:33.321 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:33.321 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:33.321 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:33.321 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:33.321 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:12:33.888
00:12:33.888 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:33.888 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:33.888 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:34.147 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:34.147 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:34.147 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:34.147 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:34.147 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:34.147 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:34.147 {
00:12:34.147 "cntlid": 83,
00:12:34.147 "qid": 0,
00:12:34.147 "state": "enabled",
00:12:34.147 "thread": "nvmf_tgt_poll_group_000",
00:12:34.147 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3",
00:12:34.147 "listen_address": {
00:12:34.147 "trtype": "TCP",
00:12:34.147 "adrfam": "IPv4",
00:12:34.147 "traddr": "10.0.0.3",
00:12:34.147 "trsvcid": "4420"
00:12:34.147 },
00:12:34.147 "peer_address": {
00:12:34.147 "trtype": "TCP",
00:12:34.147 "adrfam": "IPv4",
00:12:34.147 "traddr": "10.0.0.1",
00:12:34.147 "trsvcid": "60130"
00:12:34.147 },
00:12:34.147 "auth": {
00:12:34.147 "state": "completed",
00:12:34.147 "digest": "sha384",
00:12:34.147 "dhgroup": "ffdhe6144"
00:12:34.147 }
00:12:34.147 }
00:12:34.147 ]'
00:12:34.147 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:34.147 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:12:34.147 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:34.406 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:12:34.406 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:34.406 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:34.406 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:34.406 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:34.665 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==:
00:12:34.665 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==:
00:12:35.233 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:35.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:35.233 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3
00:12:35.233 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:35.233 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:35.233 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:35.233 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:35.233 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:12:35.233 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:12:35.492 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:12:35.493 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:35.493 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:35.493 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:12:35.493 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:12:35.493 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:35.493 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:35.493 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:35.493 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:35.493 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:35.493 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:35.493 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:35.493 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:12:36.061
00:12:36.061 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:36.061 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:36.061 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:36.320 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:36.320 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:36.320 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:36.320 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:36.320 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:36.320 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:36.320 {
00:12:36.320 "cntlid": 85,
00:12:36.320 "qid": 0,
00:12:36.320 "state": "enabled",
00:12:36.320 "thread": "nvmf_tgt_poll_group_000",
00:12:36.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3",
00:12:36.320 "listen_address": {
00:12:36.320 "trtype": "TCP",
00:12:36.320 "adrfam": "IPv4",
00:12:36.320 "traddr": "10.0.0.3",
00:12:36.320 "trsvcid": "4420"
00:12:36.320 },
00:12:36.320 "peer_address": {
00:12:36.320 "trtype": "TCP",
00:12:36.320 "adrfam": "IPv4",
00:12:36.320 "traddr": "10.0.0.1",
00:12:36.320 "trsvcid": "60152"
00:12:36.320 },
00:12:36.320 "auth": {
00:12:36.320 "state": "completed",
00:12:36.320 "digest": "sha384",
00:12:36.320 "dhgroup": "ffdhe6144"
00:12:36.320 }
00:12:36.320 }
00:12:36.320 ]'
00:12:36.320 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:36.320 22:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:12:36.320 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:36.320 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:12:36.320 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:36.320 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:36.320 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:36.320 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:36.579 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh:
00:12:36.579 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh:
00:12:37.518 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:37.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:37.518 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3
00:12:37.518 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.518 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:37.518 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.518 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:37.518 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:12:37.518 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:12:37.518 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:12:37.518 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:37.518 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:37.518 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:12:37.518 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:12:37.518 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:37.518 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3
00:12:37.518 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:37.518 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:37.518 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:37.518 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:12:37.518 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:12:37.518 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:12:38.086
00:12:38.086 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:38.086 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:38.086 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:38.346 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:38.346 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:38.346 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:38.346 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:38.346 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:38.346 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:38.346 {
00:12:38.346 "cntlid": 87,
00:12:38.346 "qid": 0,
00:12:38.346 "state": "enabled",
00:12:38.346 "thread": "nvmf_tgt_poll_group_000",
00:12:38.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3",
00:12:38.346 "listen_address": {
00:12:38.346 "trtype": "TCP",
00:12:38.346 "adrfam": "IPv4",
00:12:38.346 "traddr": "10.0.0.3",
00:12:38.346 "trsvcid": "4420"
00:12:38.346 },
00:12:38.346 "peer_address": {
00:12:38.346 "trtype": "TCP",
00:12:38.346 "adrfam": "IPv4",
00:12:38.346 "traddr": "10.0.0.1",
00:12:38.346 "trsvcid": "60174"
00:12:38.346 },
00:12:38.346 "auth": {
00:12:38.346 "state": "completed",
00:12:38.346 "digest": "sha384",
00:12:38.346 "dhgroup": "ffdhe6144"
00:12:38.346 }
00:12:38.346 }
00:12:38.346 ]'
00:12:38.346 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:38.346 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:12:38.346 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:38.346 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:12:38.346 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:38.606 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:38.606 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:38.606 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:38.865 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=:
00:12:38.865 22:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=:
00:12:39.433 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:12:39.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:12:39.433 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3
00:12:39.433 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:39.433 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:39.433 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:39.433 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:12:39.433 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:12:39.433 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:12:39.433 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:12:39.692 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:12:39.692 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:39.692 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:12:39.692 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:12:39.692 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:12:39.692 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:39.692 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:39.692 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:39.692 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:39.692 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:39.692 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:39.692 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:39.692 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:12:40.260
00:12:40.525 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:40.525 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:40.525 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:40.784 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:40.784 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:40.784 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:40.784 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:40.784 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:40.784 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:40.784 {
00:12:40.784 "cntlid": 89,
00:12:40.784 "qid": 0,
00:12:40.784 "state": "enabled",
00:12:40.784 "thread": "nvmf_tgt_poll_group_000",
00:12:40.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3",
00:12:40.784 "listen_address": {
00:12:40.784 "trtype": "TCP",
00:12:40.784 "adrfam": "IPv4",
00:12:40.784 "traddr": "10.0.0.3",
00:12:40.784 "trsvcid": "4420"
00:12:40.784 },
00:12:40.784 "peer_address": {
"trtype": "TCP", 00:12:40.784 "adrfam": "IPv4", 00:12:40.784 "traddr": "10.0.0.1", 00:12:40.784 "trsvcid": "60198" 00:12:40.784 }, 00:12:40.784 "auth": { 00:12:40.784 "state": "completed", 00:12:40.784 "digest": "sha384", 00:12:40.784 "dhgroup": "ffdhe8192" 00:12:40.784 } 00:12:40.784 } 00:12:40.784 ]' 00:12:40.784 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:40.784 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:40.784 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:40.784 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:40.784 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:40.784 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:40.784 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:40.784 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.043 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:12:41.043 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:12:41.610 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:41.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:41.867 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:41.867 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.867 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.867 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.867 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:41.867 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:41.867 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:42.126 22:43:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:42.126 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.126 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:42.126 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:42.126 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:42.126 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.126 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.126 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.126 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.126 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.126 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.126 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.126 22:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.692 00:12:42.692 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:42.692 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:42.692 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:42.949 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:42.949 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:42.949 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.949 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.949 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.949 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:42.949 { 00:12:42.949 "cntlid": 91, 00:12:42.949 "qid": 0, 00:12:42.949 "state": "enabled", 00:12:42.949 "thread": "nvmf_tgt_poll_group_000", 00:12:42.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 
00:12:42.949 "listen_address": { 00:12:42.949 "trtype": "TCP", 00:12:42.949 "adrfam": "IPv4", 00:12:42.950 "traddr": "10.0.0.3", 00:12:42.950 "trsvcid": "4420" 00:12:42.950 }, 00:12:42.950 "peer_address": { 00:12:42.950 "trtype": "TCP", 00:12:42.950 "adrfam": "IPv4", 00:12:42.950 "traddr": "10.0.0.1", 00:12:42.950 "trsvcid": "32806" 00:12:42.950 }, 00:12:42.950 "auth": { 00:12:42.950 "state": "completed", 00:12:42.950 "digest": "sha384", 00:12:42.950 "dhgroup": "ffdhe8192" 00:12:42.950 } 00:12:42.950 } 00:12:42.950 ]' 00:12:42.950 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:42.950 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:42.950 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.207 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:43.207 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.207 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.207 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.207 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.464 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:12:43.464 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:12:44.399 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.399 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:44.399 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.399 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.399 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.399 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.399 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:44.399 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:44.399 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:44.399 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:44.399 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:44.399 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:44.399 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:44.399 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:44.399 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.399 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.399 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.399 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.399 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.399 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.399 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:44.967 00:12:45.225 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.225 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:45.225 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.484 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.484 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.484 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.484 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.484 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.484 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.484 { 00:12:45.484 "cntlid": 93, 00:12:45.484 "qid": 0, 00:12:45.484 "state": "enabled", 00:12:45.484 "thread": 
"nvmf_tgt_poll_group_000", 00:12:45.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:45.484 "listen_address": { 00:12:45.484 "trtype": "TCP", 00:12:45.484 "adrfam": "IPv4", 00:12:45.484 "traddr": "10.0.0.3", 00:12:45.484 "trsvcid": "4420" 00:12:45.484 }, 00:12:45.484 "peer_address": { 00:12:45.484 "trtype": "TCP", 00:12:45.484 "adrfam": "IPv4", 00:12:45.484 "traddr": "10.0.0.1", 00:12:45.484 "trsvcid": "32836" 00:12:45.484 }, 00:12:45.484 "auth": { 00:12:45.484 "state": "completed", 00:12:45.484 "digest": "sha384", 00:12:45.484 "dhgroup": "ffdhe8192" 00:12:45.484 } 00:12:45.484 } 00:12:45.484 ]' 00:12:45.484 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.484 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:45.484 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.484 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:45.484 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.484 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.484 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.484 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.052 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:12:46.052 22:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:12:46.622 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.622 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:46.622 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.622 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.622 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.622 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.622 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:46.622 22:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:46.881 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:46.881 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:46.881 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:46.881 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:46.881 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:46.881 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.881 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3 00:12:46.881 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.881 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.881 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.881 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:46.881 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:46.881 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.448 00:12:47.707 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.707 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.707 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.977 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.977 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.977 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.977 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.977 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.977 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.977 { 00:12:47.978 "cntlid": 95, 00:12:47.978 "qid": 0, 00:12:47.978 "state": "enabled", 00:12:47.978 
"thread": "nvmf_tgt_poll_group_000", 00:12:47.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:47.978 "listen_address": { 00:12:47.978 "trtype": "TCP", 00:12:47.978 "adrfam": "IPv4", 00:12:47.978 "traddr": "10.0.0.3", 00:12:47.978 "trsvcid": "4420" 00:12:47.978 }, 00:12:47.978 "peer_address": { 00:12:47.978 "trtype": "TCP", 00:12:47.978 "adrfam": "IPv4", 00:12:47.978 "traddr": "10.0.0.1", 00:12:47.978 "trsvcid": "32876" 00:12:47.978 }, 00:12:47.978 "auth": { 00:12:47.978 "state": "completed", 00:12:47.978 "digest": "sha384", 00:12:47.978 "dhgroup": "ffdhe8192" 00:12:47.978 } 00:12:47.978 } 00:12:47.978 ]' 00:12:47.978 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.978 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:47.978 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.978 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:47.978 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.978 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.978 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.978 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.236 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:12:48.237 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:12:48.804 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.804 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:48.804 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.804 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.804 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.804 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:48.804 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:48.804 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.804 22:44:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:48.804 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:49.371 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:49.371 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:49.371 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:49.371 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:49.371 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:49.371 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.371 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.371 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.371 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.371 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.371 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.371 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.371 22:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.630 00:12:49.630 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.630 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.630 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.888 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.888 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.888 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.888 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.888 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.888 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.888 { 00:12:49.888 "cntlid": 97, 00:12:49.888 "qid": 0, 00:12:49.888 "state": "enabled", 00:12:49.889 "thread": "nvmf_tgt_poll_group_000", 00:12:49.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:49.889 "listen_address": { 00:12:49.889 "trtype": "TCP", 00:12:49.889 "adrfam": "IPv4", 00:12:49.889 "traddr": "10.0.0.3", 00:12:49.889 "trsvcid": "4420" 00:12:49.889 }, 00:12:49.889 "peer_address": { 00:12:49.889 "trtype": "TCP", 00:12:49.889 "adrfam": "IPv4", 00:12:49.889 "traddr": "10.0.0.1", 00:12:49.889 "trsvcid": "32890" 00:12:49.889 }, 00:12:49.889 "auth": { 00:12:49.889 "state": "completed", 00:12:49.889 "digest": "sha512", 00:12:49.889 "dhgroup": "null" 00:12:49.889 } 00:12:49.889 } 00:12:49.889 ]' 00:12:49.889 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:49.889 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:49.889 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:49.889 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:49.889 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:49.889 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:49.889 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:49.889 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.147 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:12:50.147 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:12:50.746 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.746 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:50.746 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.746 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.746 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:50.746 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.746 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:50.746 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:51.004 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:51.004 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:51.004 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:51.004 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:51.004 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:51.004 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.004 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.004 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.004 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.004 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.004 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.004 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.004 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.262 00:12:51.262 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.262 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.262 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:51.827 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:51.827 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:51.827 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.827 22:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.827 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.827 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:51.827 { 00:12:51.827 "cntlid": 99, 00:12:51.827 "qid": 0, 00:12:51.827 "state": "enabled", 00:12:51.827 "thread": "nvmf_tgt_poll_group_000", 00:12:51.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:51.828 "listen_address": { 00:12:51.828 "trtype": "TCP", 00:12:51.828 "adrfam": "IPv4", 00:12:51.828 "traddr": "10.0.0.3", 00:12:51.828 "trsvcid": "4420" 00:12:51.828 }, 00:12:51.828 "peer_address": { 00:12:51.828 "trtype": "TCP", 00:12:51.828 "adrfam": "IPv4", 00:12:51.828 "traddr": "10.0.0.1", 00:12:51.828 "trsvcid": "35894" 00:12:51.828 }, 00:12:51.828 "auth": { 00:12:51.828 "state": "completed", 00:12:51.828 "digest": "sha512", 00:12:51.828 "dhgroup": "null" 00:12:51.828 } 00:12:51.828 } 00:12:51.828 ]' 00:12:51.828 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:51.828 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:51.828 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:51.828 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:51.828 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:51.828 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:51.828 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:51.828 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.086 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:12:52.086 22:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:12:52.653 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:52.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:52.653 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:52.653 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.653 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.653 22:44:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.653 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:52.653 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:52.653 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:52.912 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:52.912 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.913 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:52.913 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:52.913 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:52.913 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.913 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.913 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.913 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.913 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.913 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.913 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:52.913 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.173 00:12:53.432 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:53.432 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.432 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:53.432 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.432 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.432 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.432 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.432 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.432 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.432 { 00:12:53.432 "cntlid": 101, 00:12:53.432 "qid": 0, 00:12:53.432 "state": "enabled", 00:12:53.432 "thread": "nvmf_tgt_poll_group_000", 00:12:53.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:53.432 "listen_address": { 00:12:53.432 "trtype": "TCP", 00:12:53.432 "adrfam": "IPv4", 00:12:53.432 "traddr": "10.0.0.3", 00:12:53.432 "trsvcid": "4420" 00:12:53.432 }, 00:12:53.432 "peer_address": { 00:12:53.432 "trtype": "TCP", 00:12:53.432 "adrfam": "IPv4", 00:12:53.432 "traddr": "10.0.0.1", 00:12:53.432 "trsvcid": "35916" 00:12:53.432 }, 00:12:53.432 "auth": { 00:12:53.432 "state": "completed", 00:12:53.432 "digest": "sha512", 00:12:53.432 "dhgroup": "null" 00:12:53.432 } 00:12:53.432 } 00:12:53.432 ]' 00:12:53.432 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.691 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:53.691 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.691 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:53.691 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.691 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.691 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.691 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.950 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:12:53.950 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:12:54.517 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:54.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:54.518 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:54.518 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.518 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:54.518 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.518 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:54.518 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:54.518 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:54.776 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:12:54.776 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.776 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:54.776 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:54.776 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:54.777 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.777 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3 00:12:54.777 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.777 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.777 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.777 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:54.777 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:54.777 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:55.035 00:12:55.035 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:55.035 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:55.035 22:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:55.294 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:55.553 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:55.553 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:55.553 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.553 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.553 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.553 { 00:12:55.553 "cntlid": 103, 00:12:55.553 "qid": 0, 00:12:55.553 "state": "enabled", 00:12:55.553 "thread": "nvmf_tgt_poll_group_000", 00:12:55.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:55.553 "listen_address": { 00:12:55.553 "trtype": "TCP", 00:12:55.553 "adrfam": "IPv4", 00:12:55.553 "traddr": "10.0.0.3", 00:12:55.553 "trsvcid": "4420" 00:12:55.553 }, 00:12:55.553 "peer_address": { 00:12:55.553 "trtype": "TCP", 00:12:55.553 "adrfam": "IPv4", 00:12:55.553 "traddr": "10.0.0.1", 00:12:55.553 "trsvcid": "35934" 00:12:55.553 }, 00:12:55.553 "auth": { 00:12:55.553 "state": "completed", 00:12:55.553 "digest": "sha512", 00:12:55.553 "dhgroup": "null" 00:12:55.553 } 00:12:55.553 } 00:12:55.553 ]' 00:12:55.553 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.553 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.553 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.553 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:55.553 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.553 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.553 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.553 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.812 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:12:55.812 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:12:56.380 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:56.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:56.380 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:56.380 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.380 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.380 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:12:56.380 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:56.380 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:56.380 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:56.380 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:56.640 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:12:56.640 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.640 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:56.640 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:56.640 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:56.640 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.640 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.640 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.640 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.640 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.640 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.640 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:56.640 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:57.207 00:12:57.207 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:57.207 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:57.207 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:57.466 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:57.466 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:57.466 
22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.466 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:57.466 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.466 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:57.466 { 00:12:57.466 "cntlid": 105, 00:12:57.466 "qid": 0, 00:12:57.466 "state": "enabled", 00:12:57.466 "thread": "nvmf_tgt_poll_group_000", 00:12:57.466 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:57.466 "listen_address": { 00:12:57.466 "trtype": "TCP", 00:12:57.466 "adrfam": "IPv4", 00:12:57.466 "traddr": "10.0.0.3", 00:12:57.466 "trsvcid": "4420" 00:12:57.466 }, 00:12:57.466 "peer_address": { 00:12:57.466 "trtype": "TCP", 00:12:57.466 "adrfam": "IPv4", 00:12:57.466 "traddr": "10.0.0.1", 00:12:57.466 "trsvcid": "35962" 00:12:57.466 }, 00:12:57.466 "auth": { 00:12:57.466 "state": "completed", 00:12:57.466 "digest": "sha512", 00:12:57.466 "dhgroup": "ffdhe2048" 00:12:57.466 } 00:12:57.466 } 00:12:57.466 ]' 00:12:57.466 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:57.466 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:57.466 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:57.466 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:57.466 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.466 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.466 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.466 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.723 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:12:57.723 22:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:12:58.657 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.657 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:12:58.657 22:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.657 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.657 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.657 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.657 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:58.657 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:12:58.915 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:12:58.915 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.915 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:58.915 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:58.915 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:58.915 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.915 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.915 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.915 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.915 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.915 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.915 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:58.915 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:59.173 00:12:59.173 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:59.173 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:59.173 22:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:59.432 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:12:59.432 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:59.432 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.432 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.432 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.432 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:59.432 { 00:12:59.432 "cntlid": 107, 00:12:59.432 "qid": 0, 00:12:59.432 "state": "enabled", 00:12:59.432 "thread": "nvmf_tgt_poll_group_000", 00:12:59.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:12:59.432 "listen_address": { 00:12:59.432 "trtype": "TCP", 00:12:59.432 "adrfam": "IPv4", 00:12:59.432 "traddr": "10.0.0.3", 00:12:59.432 "trsvcid": "4420" 00:12:59.432 }, 00:12:59.432 "peer_address": { 00:12:59.432 "trtype": "TCP", 00:12:59.432 "adrfam": "IPv4", 00:12:59.432 "traddr": "10.0.0.1", 00:12:59.432 "trsvcid": "35986" 00:12:59.432 }, 00:12:59.432 "auth": { 00:12:59.432 "state": "completed", 00:12:59.432 "digest": "sha512", 00:12:59.432 "dhgroup": "ffdhe2048" 00:12:59.432 } 00:12:59.432 } 00:12:59.432 ]' 00:12:59.432 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:59.690 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:59.690 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:59.690 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:59.690 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.690 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.690 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.690 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.948 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:12:59.948 22:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:13:00.513 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:00.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:00.513 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:00.513 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.513 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.513 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.513 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:00.513 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:00.513 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:01.078 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:13:01.078 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:01.078 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:01.078 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:01.078 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:01.078 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:01.078 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.078 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.078 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.078 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.078 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.078 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.078 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:01.335 00:13:01.335 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:01.335 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:01.335 22:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:13:01.595 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:01.595 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:01.595 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.595 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.595 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.595 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:01.595 { 00:13:01.595 "cntlid": 109, 00:13:01.595 "qid": 0, 00:13:01.595 "state": "enabled", 00:13:01.595 "thread": "nvmf_tgt_poll_group_000", 00:13:01.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:01.595 "listen_address": { 00:13:01.595 "trtype": "TCP", 00:13:01.595 "adrfam": "IPv4", 00:13:01.595 "traddr": "10.0.0.3", 00:13:01.595 "trsvcid": "4420" 00:13:01.595 }, 00:13:01.595 "peer_address": { 00:13:01.595 "trtype": "TCP", 00:13:01.595 "adrfam": "IPv4", 00:13:01.595 "traddr": "10.0.0.1", 00:13:01.595 "trsvcid": "56070" 00:13:01.595 }, 00:13:01.595 "auth": { 00:13:01.595 "state": "completed", 00:13:01.595 "digest": "sha512", 00:13:01.595 "dhgroup": "ffdhe2048" 00:13:01.595 } 00:13:01.595 } 00:13:01.595 ]' 00:13:01.595 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:01.595 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:01.595 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:01.595 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:01.595 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.595 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.595 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.595 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:02.179 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:13:02.179 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:13:02.773 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:02.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:02.773 22:44:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:02.773 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.773 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.773 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.773 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:02.773 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:02.773 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:02.773 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:02.773 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.773 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:02.773 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:02.773 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:02.773 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.773 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3 00:13:02.773 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.773 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.033 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.033 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:03.033 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:03.033 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:03.292 00:13:03.292 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:03.292 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:03.292 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:03.551 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:03.551 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:03.551 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.551 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.551 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.551 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:03.551 { 00:13:03.551 "cntlid": 111, 00:13:03.551 "qid": 0, 00:13:03.551 "state": "enabled", 00:13:03.551 "thread": "nvmf_tgt_poll_group_000", 00:13:03.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:03.551 "listen_address": { 00:13:03.551 "trtype": "TCP", 00:13:03.551 "adrfam": "IPv4", 00:13:03.551 "traddr": "10.0.0.3", 00:13:03.551 "trsvcid": "4420" 00:13:03.551 }, 00:13:03.551 "peer_address": { 00:13:03.551 "trtype": "TCP", 00:13:03.551 "adrfam": "IPv4", 00:13:03.551 "traddr": "10.0.0.1", 00:13:03.551 "trsvcid": "56098" 00:13:03.551 }, 00:13:03.551 "auth": { 00:13:03.551 "state": "completed", 00:13:03.551 "digest": "sha512", 00:13:03.551 "dhgroup": "ffdhe2048" 00:13:03.551 } 00:13:03.551 } 00:13:03.551 ]' 00:13:03.551 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:03.551 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:03.551 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:03.551 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:03.551 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:03.811 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:03.811 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:03.811 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:04.070 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:13:04.070 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:13:04.639 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:04.639 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:04.639 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:04.639 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.639 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.639 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.639 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:04.639 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:04.639 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:04.639 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:04.899 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:04.899 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.899 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:04.899 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:04.899 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:04.899 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.899 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.899 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.899 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.899 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.899 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.899 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:04.899 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.468 00:13:05.468 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:05.468 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:13:05.468 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:05.727 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:05.727 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:05.727 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.727 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.727 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.727 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:05.727 { 00:13:05.727 "cntlid": 113, 00:13:05.727 "qid": 0, 00:13:05.727 "state": "enabled", 00:13:05.727 "thread": "nvmf_tgt_poll_group_000", 00:13:05.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:05.727 "listen_address": { 00:13:05.727 "trtype": "TCP", 00:13:05.727 "adrfam": "IPv4", 00:13:05.727 "traddr": "10.0.0.3", 00:13:05.727 "trsvcid": "4420" 00:13:05.727 }, 00:13:05.727 "peer_address": { 00:13:05.727 "trtype": "TCP", 00:13:05.727 "adrfam": "IPv4", 00:13:05.727 "traddr": "10.0.0.1", 00:13:05.727 "trsvcid": "56128" 00:13:05.727 }, 00:13:05.727 "auth": { 00:13:05.727 "state": "completed", 00:13:05.727 "digest": "sha512", 00:13:05.727 "dhgroup": "ffdhe3072" 00:13:05.727 } 00:13:05.727 } 00:13:05.727 ]' 00:13:05.727 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:05.727 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:05.727 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:05.727 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:05.727 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:05.727 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.727 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.727 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.296 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:13:06.296 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret 
DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:13:06.865 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:06.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:06.865 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:06.865 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.865 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.865 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.865 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:06.865 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:06.865 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:07.125 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:07.125 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:07.125 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:07.125 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:07.125 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:07.125 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:07.125 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.125 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.125 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.125 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.125 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.125 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.125 22:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:07.384 00:13:07.384 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:07.384 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:07.384 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:07.643 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:07.643 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:07.643 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.643 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.643 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.643 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:07.643 { 00:13:07.643 "cntlid": 115, 00:13:07.643 "qid": 0, 00:13:07.643 "state": "enabled", 00:13:07.643 "thread": "nvmf_tgt_poll_group_000", 00:13:07.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:07.643 "listen_address": { 00:13:07.643 "trtype": "TCP", 00:13:07.643 "adrfam": "IPv4", 00:13:07.643 "traddr": "10.0.0.3", 00:13:07.643 "trsvcid": "4420" 00:13:07.643 }, 00:13:07.643 "peer_address": { 00:13:07.643 "trtype": "TCP", 00:13:07.643 "adrfam": "IPv4", 00:13:07.643 "traddr": "10.0.0.1", 00:13:07.643 "trsvcid": "56140" 00:13:07.643 }, 00:13:07.643 "auth": { 00:13:07.643 "state": "completed", 00:13:07.643 "digest": "sha512", 00:13:07.643 "dhgroup": "ffdhe3072" 00:13:07.643 } 00:13:07.643 } 00:13:07.643 ]' 00:13:07.643 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.903 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:07.903 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.903 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:07.903 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:07.903 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.903 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.903 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.162 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:13:08.162 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 
172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:13:08.731 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:08.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:08.731 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:08.731 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.731 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.731 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.731 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.731 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:08.731 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:09.299 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:09.299 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:09.299 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:09.299 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:09.299 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:09.299 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:09.299 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.299 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.299 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.299 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.299 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.299 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.299 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:09.558 00:13:09.558 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:09.559 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:09.559 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:09.818 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:09.818 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:09.818 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.818 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.818 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.818 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:09.818 { 00:13:09.818 "cntlid": 117, 00:13:09.818 "qid": 0, 00:13:09.818 "state": "enabled", 00:13:09.818 "thread": "nvmf_tgt_poll_group_000", 00:13:09.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:09.818 "listen_address": { 00:13:09.818 "trtype": "TCP", 00:13:09.818 "adrfam": "IPv4", 00:13:09.818 "traddr": "10.0.0.3", 00:13:09.818 "trsvcid": "4420" 00:13:09.818 }, 00:13:09.818 "peer_address": { 00:13:09.818 "trtype": "TCP", 00:13:09.818 "adrfam": "IPv4", 00:13:09.818 "traddr": "10.0.0.1", 00:13:09.818 "trsvcid": "56182" 00:13:09.818 }, 00:13:09.818 "auth": { 00:13:09.818 "state": "completed", 00:13:09.818 "digest": "sha512", 00:13:09.818 "dhgroup": "ffdhe3072" 00:13:09.818 } 00:13:09.818 } 00:13:09.818 ]' 00:13:09.818 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:09.818 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:09.818 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:09.818 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:09.818 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:10.076 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.076 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.076 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:10.335 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:13:10.335 22:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:13:10.903 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:10.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:10.903 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:10.903 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.903 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.903 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.903 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:10.903 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:10.903 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:11.162 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:11.162 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:11.162 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:11.162 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:11.162 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:11.162 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.162 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3 00:13:11.162 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.162 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.162 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.162 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:11.162 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:11.162 22:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:11.729 00:13:11.729 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:11.729 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:11.729 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:11.729 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:11.729 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:11.729 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.729 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.987 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.987 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:11.987 { 00:13:11.987 "cntlid": 119, 00:13:11.987 "qid": 0, 00:13:11.987 "state": "enabled", 00:13:11.987 "thread": "nvmf_tgt_poll_group_000", 00:13:11.987 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:11.987 "listen_address": { 00:13:11.987 "trtype": "TCP", 00:13:11.987 "adrfam": "IPv4", 00:13:11.987 "traddr": "10.0.0.3", 00:13:11.987 "trsvcid": "4420" 00:13:11.987 }, 00:13:11.987 "peer_address": { 00:13:11.987 "trtype": "TCP", 00:13:11.987 "adrfam": "IPv4", 00:13:11.987 "traddr": "10.0.0.1", 00:13:11.987 "trsvcid": "46538" 00:13:11.987 }, 00:13:11.987 "auth": { 00:13:11.987 "state": "completed", 00:13:11.987 "digest": "sha512", 00:13:11.987 "dhgroup": "ffdhe3072" 00:13:11.987 } 00:13:11.987 } 00:13:11.987 ]' 00:13:11.987 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:11.987 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:11.987 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:11.987 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:11.987 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:11.987 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:11.988 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:11.988 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.245 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:13:12.245 22:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:13:12.809 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:12.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:12.809 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:12.809 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.809 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.809 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.809 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:12.809 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:12.809 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:12.809 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:13.376 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:13.376 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.376 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:13.376 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:13.376 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:13.376 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.376 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.376 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.376 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.376 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.376 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.376 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.376 22:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.641 00:13:13.641 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:13.641 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:13.641 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:13.902 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:13.902 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:13.902 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.902 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.902 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.902 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:13.902 { 00:13:13.902 "cntlid": 121, 00:13:13.902 "qid": 0, 00:13:13.902 "state": "enabled", 00:13:13.902 "thread": "nvmf_tgt_poll_group_000", 00:13:13.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:13.902 "listen_address": { 00:13:13.902 "trtype": "TCP", 00:13:13.902 "adrfam": "IPv4", 00:13:13.902 "traddr": "10.0.0.3", 00:13:13.902 "trsvcid": "4420" 00:13:13.902 }, 00:13:13.902 "peer_address": { 00:13:13.902 "trtype": "TCP", 00:13:13.902 "adrfam": "IPv4", 00:13:13.902 "traddr": "10.0.0.1", 00:13:13.902 "trsvcid": "46548" 00:13:13.902 }, 00:13:13.902 "auth": { 00:13:13.902 "state": "completed", 00:13:13.902 "digest": "sha512", 00:13:13.902 "dhgroup": "ffdhe4096" 00:13:13.902 } 00:13:13.902 } 00:13:13.902 ]' 00:13:13.902 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.902 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:13.902 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.902 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:13.902 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.902 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.902 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.902 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.160 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret 
DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:13:14.160 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:13:15.113 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.113 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:15.113 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.113 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.113 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.113 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:15.114 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:15.114 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:15.372 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:15.372 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.372 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:15.372 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:15.372 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:15.372 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.372 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.372 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.372 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.372 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.372 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.372 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.372 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:15.631 00:13:15.631 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:15.631 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:15.631 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:15.897 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:15.897 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:15.897 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.897 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.897 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.897 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:15.897 { 00:13:15.897 "cntlid": 123, 00:13:15.897 "qid": 0, 00:13:15.897 "state": "enabled", 00:13:15.897 "thread": "nvmf_tgt_poll_group_000", 00:13:15.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:15.897 "listen_address": { 00:13:15.897 "trtype": "TCP", 00:13:15.897 "adrfam": "IPv4", 00:13:15.897 "traddr": "10.0.0.3", 00:13:15.897 "trsvcid": "4420" 00:13:15.897 }, 00:13:15.897 "peer_address": { 00:13:15.897 "trtype": "TCP", 00:13:15.897 "adrfam": "IPv4", 00:13:15.897 "traddr": "10.0.0.1", 00:13:15.897 "trsvcid": "46576" 00:13:15.897 }, 00:13:15.897 "auth": { 00:13:15.897 "state": "completed", 00:13:15.897 "digest": "sha512", 00:13:15.897 "dhgroup": "ffdhe4096" 00:13:15.897 } 00:13:15.897 } 00:13:15.897 ]' 00:13:15.897 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.169 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:16.169 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.169 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:16.169 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.169 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.169 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.169 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.428 22:44:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:13:16.428 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:13:16.994 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:16.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:16.994 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:16.994 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.994 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.994 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.994 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:16.994 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:16.994 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:17.560 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:17.560 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.560 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:17.560 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:17.560 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:17.560 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.560 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.560 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.560 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.560 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.560 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.560 22:44:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.560 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.818 00:13:17.818 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:17.818 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:17.818 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.076 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.076 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.076 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.076 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.076 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.076 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:18.076 { 00:13:18.076 "cntlid": 125, 00:13:18.076 "qid": 0, 00:13:18.076 "state": "enabled", 00:13:18.076 "thread": "nvmf_tgt_poll_group_000", 00:13:18.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:18.076 "listen_address": { 00:13:18.076 "trtype": "TCP", 00:13:18.076 "adrfam": "IPv4", 00:13:18.076 "traddr": "10.0.0.3", 00:13:18.076 "trsvcid": "4420" 00:13:18.076 }, 00:13:18.076 "peer_address": { 00:13:18.076 "trtype": "TCP", 00:13:18.076 "adrfam": "IPv4", 00:13:18.076 "traddr": "10.0.0.1", 00:13:18.076 "trsvcid": "46600" 00:13:18.076 }, 00:13:18.076 "auth": { 00:13:18.076 "state": "completed", 00:13:18.076 "digest": "sha512", 00:13:18.076 "dhgroup": "ffdhe4096" 00:13:18.076 } 00:13:18.076 } 00:13:18.076 ]' 00:13:18.076 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:18.076 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:18.076 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:18.076 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:18.076 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:18.334 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.334 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.334 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.592 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:13:18.592 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:13:19.159 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.159 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:19.159 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.159 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.159 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.159 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:19.159 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:19.159 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:19.418 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:19.418 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.418 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:19.418 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:19.418 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:19.418 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.418 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3 00:13:19.418 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.418 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.418 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.418 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:19.418 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:19.418 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:19.985 00:13:19.985 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:19.985 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:19.985 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:19.985 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:19.985 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:19.985 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.985 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.985 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.985 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:19.985 { 00:13:19.985 "cntlid": 127, 00:13:19.985 "qid": 0, 00:13:19.985 "state": "enabled", 00:13:19.985 "thread": "nvmf_tgt_poll_group_000", 00:13:19.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:19.985 "listen_address": { 00:13:19.985 "trtype": "TCP", 00:13:19.985 "adrfam": "IPv4", 00:13:19.985 "traddr": "10.0.0.3", 00:13:19.985 "trsvcid": "4420" 00:13:19.985 }, 00:13:19.985 "peer_address": { 00:13:19.985 "trtype": "TCP", 00:13:19.985 "adrfam": "IPv4", 00:13:19.985 "traddr": "10.0.0.1", 00:13:19.985 "trsvcid": "46638" 00:13:19.985 }, 00:13:19.985 "auth": { 00:13:19.985 "state": "completed", 00:13:19.985 "digest": "sha512", 00:13:19.985 "dhgroup": "ffdhe4096" 00:13:19.985 } 00:13:19.985 } 00:13:19.985 ]' 00:13:19.985 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.245 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:20.245 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.245 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:20.245 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.245 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.245 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.245 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.504 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:13:20.504 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:13:21.435 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.435 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:21.435 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.435 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.435 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.435 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:21.435 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:21.435 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:21.435 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:21.694 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:21.694 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:21.694 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:21.694 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:21.694 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:21.694 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.694 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.694 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.694 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.694 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.694 22:44:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.694 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.694 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:21.952 00:13:22.211 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.211 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.211 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:22.470 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.470 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.470 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.470 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.470 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.470 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:22.470 { 00:13:22.470 "cntlid": 129, 00:13:22.470 "qid": 0, 00:13:22.470 "state": "enabled", 00:13:22.470 "thread": "nvmf_tgt_poll_group_000", 00:13:22.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:22.470 "listen_address": { 00:13:22.470 "trtype": "TCP", 00:13:22.470 "adrfam": "IPv4", 00:13:22.470 "traddr": "10.0.0.3", 00:13:22.470 "trsvcid": "4420" 00:13:22.470 }, 00:13:22.470 "peer_address": { 00:13:22.470 "trtype": "TCP", 00:13:22.470 "adrfam": "IPv4", 00:13:22.470 "traddr": "10.0.0.1", 00:13:22.470 "trsvcid": "44218" 00:13:22.470 }, 00:13:22.470 "auth": { 00:13:22.470 "state": "completed", 00:13:22.470 "digest": "sha512", 00:13:22.470 "dhgroup": "ffdhe6144" 00:13:22.470 } 00:13:22.470 } 00:13:22.470 ]' 00:13:22.470 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:22.470 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:22.470 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.470 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:22.470 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.729 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.729 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.729 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.988 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:13:22.988 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:13:23.926 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.926 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:23.926 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.926 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.926 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.926 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:23.926 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:23.926 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:24.186 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:24.186 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.186 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:24.186 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:24.186 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:24.186 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.186 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.186 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.186 22:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.186 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.186 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.186 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.186 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.754 00:13:24.754 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.754 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.754 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:25.014 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.014 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.014 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.014 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.014 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.014 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.014 { 00:13:25.014 "cntlid": 131, 00:13:25.014 "qid": 0, 00:13:25.014 "state": "enabled", 00:13:25.014 "thread": "nvmf_tgt_poll_group_000", 00:13:25.014 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:25.014 "listen_address": { 00:13:25.014 "trtype": "TCP", 00:13:25.014 "adrfam": "IPv4", 00:13:25.014 "traddr": "10.0.0.3", 00:13:25.014 "trsvcid": "4420" 00:13:25.014 }, 00:13:25.014 "peer_address": { 00:13:25.014 "trtype": "TCP", 00:13:25.014 "adrfam": "IPv4", 00:13:25.014 "traddr": "10.0.0.1", 00:13:25.014 "trsvcid": "44242" 00:13:25.014 }, 00:13:25.014 "auth": { 00:13:25.014 "state": "completed", 00:13:25.014 "digest": "sha512", 00:13:25.014 "dhgroup": "ffdhe6144" 00:13:25.014 } 00:13:25.014 } 00:13:25.014 ]' 00:13:25.014 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.014 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:25.014 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.014 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:25.014 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:25.014 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.014 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.014 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.273 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:13:25.273 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:13:26.211 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.211 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:26.211 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.211 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.211 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.211 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.211 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:26.211 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:26.469 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:26.469 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.469 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:26.469 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:26.469 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:26.469 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.470 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.470 22:44:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.470 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.470 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.470 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.470 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.470 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:27.037 00:13:27.037 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.037 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.037 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.296 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.296 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.296 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.296 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.296 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.296 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.296 { 00:13:27.296 "cntlid": 133, 00:13:27.296 "qid": 0, 00:13:27.296 "state": "enabled", 00:13:27.296 "thread": "nvmf_tgt_poll_group_000", 00:13:27.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:27.296 "listen_address": { 00:13:27.296 "trtype": "TCP", 00:13:27.296 "adrfam": "IPv4", 00:13:27.296 "traddr": "10.0.0.3", 00:13:27.296 "trsvcid": "4420" 00:13:27.296 }, 00:13:27.296 "peer_address": { 00:13:27.296 "trtype": "TCP", 00:13:27.296 "adrfam": "IPv4", 00:13:27.296 "traddr": "10.0.0.1", 00:13:27.296 "trsvcid": "44274" 00:13:27.296 }, 00:13:27.296 "auth": { 00:13:27.296 "state": "completed", 00:13:27.296 "digest": "sha512", 00:13:27.296 "dhgroup": "ffdhe6144" 00:13:27.296 } 00:13:27.296 } 00:13:27.296 ]' 00:13:27.297 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.297 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:27.297 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.297 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:27.297 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.297 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.297 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.297 22:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.866 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:13:27.866 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:13:28.436 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.436 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:28.436 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.436 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.436 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.436 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.436 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:28.436 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:28.696 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:28.696 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.696 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:28.696 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:28.696 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:28.696 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.696 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3 00:13:28.696 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.696 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.696 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.696 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:28.696 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:28.696 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:29.264 00:13:29.264 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.264 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.264 22:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.598 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.598 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.598 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.598 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.598 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.598 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.598 { 00:13:29.598 "cntlid": 135, 00:13:29.598 "qid": 0, 00:13:29.598 "state": "enabled", 00:13:29.598 "thread": "nvmf_tgt_poll_group_000", 00:13:29.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:29.598 "listen_address": { 00:13:29.598 "trtype": "TCP", 00:13:29.598 "adrfam": "IPv4", 00:13:29.598 "traddr": "10.0.0.3", 00:13:29.598 "trsvcid": "4420" 00:13:29.598 }, 00:13:29.598 "peer_address": { 00:13:29.598 "trtype": "TCP", 00:13:29.598 "adrfam": "IPv4", 00:13:29.598 "traddr": "10.0.0.1", 00:13:29.598 "trsvcid": "44294" 00:13:29.598 }, 00:13:29.598 "auth": { 00:13:29.598 "state": "completed", 00:13:29.598 "digest": "sha512", 00:13:29.598 "dhgroup": "ffdhe6144" 00:13:29.598 } 00:13:29.598 } 00:13:29.598 ]' 00:13:29.598 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.598 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.598 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.598 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:29.598 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:29.910 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.910 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.910 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.910 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:13:29.910 22:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:13:30.846 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.846 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:30.846 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.846 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.846 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.846 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:30.846 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.846 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:30.846 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:31.105 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:31.105 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.105 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:31.105 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:31.105 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:31.105 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.105 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.105 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.105 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.105 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.105 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.105 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.105 22:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.673 00:13:31.673 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.673 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.673 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.932 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.932 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.932 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.932 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.932 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.932 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.932 { 00:13:31.932 "cntlid": 137, 00:13:31.932 "qid": 0, 00:13:31.932 "state": "enabled", 00:13:31.932 "thread": "nvmf_tgt_poll_group_000", 00:13:31.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:31.932 "listen_address": { 00:13:31.932 "trtype": "TCP", 00:13:31.932 "adrfam": "IPv4", 00:13:31.932 "traddr": "10.0.0.3", 00:13:31.932 "trsvcid": "4420" 00:13:31.932 }, 00:13:31.932 "peer_address": { 00:13:31.932 "trtype": "TCP", 00:13:31.932 "adrfam": "IPv4", 00:13:31.932 "traddr": "10.0.0.1", 00:13:31.932 "trsvcid": "41466" 00:13:31.932 }, 00:13:31.932 "auth": { 00:13:31.932 "state": "completed", 00:13:31.932 "digest": "sha512", 00:13:31.932 "dhgroup": "ffdhe8192" 00:13:31.932 } 00:13:31.932 } 00:13:31.932 ]' 00:13:31.933 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.933 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.933 22:44:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.193 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:32.193 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.193 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.193 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.193 22:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.452 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:13:32.452 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:13:33.388 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.388 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:33.388 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.389 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.389 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.389 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.389 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:33.389 22:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:33.647 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:33.647 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.647 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:33.647 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:33.647 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:33.647 22:44:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.647 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.647 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.647 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.647 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.647 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.647 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.647 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.581 00:13:34.581 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:34.581 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:34.581 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.839 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.839 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.839 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.839 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.839 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.839 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.839 { 00:13:34.839 "cntlid": 139, 00:13:34.839 "qid": 0, 00:13:34.839 "state": "enabled", 00:13:34.839 "thread": "nvmf_tgt_poll_group_000", 00:13:34.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:34.839 "listen_address": { 00:13:34.839 "trtype": "TCP", 00:13:34.839 "adrfam": "IPv4", 00:13:34.839 "traddr": "10.0.0.3", 00:13:34.839 "trsvcid": "4420" 00:13:34.839 }, 00:13:34.839 "peer_address": { 00:13:34.839 "trtype": "TCP", 00:13:34.839 "adrfam": "IPv4", 00:13:34.839 "traddr": "10.0.0.1", 00:13:34.839 "trsvcid": "41486" 00:13:34.839 }, 00:13:34.839 "auth": { 00:13:34.839 "state": "completed", 00:13:34.839 "digest": "sha512", 00:13:34.839 "dhgroup": "ffdhe8192" 00:13:34.839 } 00:13:34.839 } 00:13:34.839 ]' 00:13:34.839 22:44:49 
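
Each authenticated round follows the same recipe, visible in the trace above: pin the host's allowed digest/DH group, authorize the host NQN on the subsystem with the round's key pair, then attach. A condensed sketch of the key1 round ($hostnqn stands for the full uuid-based host NQN used throughout):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Host side: restrict DH-HMAC-CHAP negotiation to one digest and DH group.
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    # Target side: allow the host with key1, plus ckey1 for bidirectional auth.
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Host side: attach; I/O is only possible once the handshake completes.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
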
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.839 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:34.839 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.839 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:34.839 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.839 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.839 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.839 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.097 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:13:35.097 22:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: --dhchap-ctrl-secret DHHC-1:02:NzM4MzRkNGEwNmMyYTBjZmFlMmZmODc2MjYwNDRhMGMzZWVhYTdlNWEyNjM0OTZjAPJmNw==: 00:13:36.032 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.032 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:36.032 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.032 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.032 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.032 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.032 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:36.032 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:36.291 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:36.291 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.291 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:36.291 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
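
Alongside the SPDK host stack, each round is repeated with the kernel initiator through nvme-cli, passing the DHHC-1 secrets on the command line (the nvme_connect helper above). The shape of that call, with the per-run secrets elided into $key/$ckey and $hostnqn as above:

    # Kernel initiator: authenticate with explicit DHHC-1 secrets, then tear down.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
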
dhgroup=ffdhe8192 00:13:36.291 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:36.291 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.291 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.291 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.291 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.291 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.291 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.291 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.291 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.856 00:13:36.856 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.856 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:36.856 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:37.114 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:37.114 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:37.114 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.114 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.114 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.114 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.114 { 00:13:37.114 "cntlid": 141, 00:13:37.114 "qid": 0, 00:13:37.114 "state": "enabled", 00:13:37.114 "thread": "nvmf_tgt_poll_group_000", 00:13:37.114 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:37.114 "listen_address": { 00:13:37.114 "trtype": "TCP", 00:13:37.115 "adrfam": "IPv4", 00:13:37.115 "traddr": "10.0.0.3", 00:13:37.115 "trsvcid": "4420" 00:13:37.115 }, 00:13:37.115 "peer_address": { 00:13:37.115 "trtype": "TCP", 00:13:37.115 "adrfam": "IPv4", 00:13:37.115 "traddr": "10.0.0.1", 00:13:37.115 "trsvcid": "41504" 00:13:37.115 }, 00:13:37.115 "auth": { 00:13:37.115 "state": "completed", 00:13:37.115 "digest": 
"sha512", 00:13:37.115 "dhgroup": "ffdhe8192" 00:13:37.115 } 00:13:37.115 } 00:13:37.115 ]' 00:13:37.115 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.115 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:37.115 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.372 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:37.372 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.372 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.372 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.372 22:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.630 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:13:37.630 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:01:ZjQyYTJlZDczODdmNTc1OTdkYzgxN2FhN2ZlMjMzYziqnssh: 00:13:38.195 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.195 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:38.195 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.195 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.195 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.195 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.195 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:38.195 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:38.454 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:38.454 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.454 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:38.454 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:38.454 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:38.454 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.454 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3 00:13:38.454 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.454 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.454 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.454 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:38.454 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:38.454 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:39.040 00:13:39.040 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.040 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.040 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.299 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.299 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.299 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.299 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.299 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.299 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.299 { 00:13:39.299 "cntlid": 143, 00:13:39.299 "qid": 0, 00:13:39.299 "state": "enabled", 00:13:39.299 "thread": "nvmf_tgt_poll_group_000", 00:13:39.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:39.299 "listen_address": { 00:13:39.299 "trtype": "TCP", 00:13:39.299 "adrfam": "IPv4", 00:13:39.299 "traddr": "10.0.0.3", 00:13:39.299 "trsvcid": "4420" 00:13:39.299 }, 00:13:39.299 "peer_address": { 00:13:39.299 "trtype": "TCP", 00:13:39.299 "adrfam": "IPv4", 00:13:39.299 "traddr": "10.0.0.1", 00:13:39.299 "trsvcid": "41520" 00:13:39.299 }, 00:13:39.299 "auth": { 00:13:39.299 "state": "completed", 00:13:39.299 
"digest": "sha512", 00:13:39.299 "dhgroup": "ffdhe8192" 00:13:39.299 } 00:13:39.299 } 00:13:39.299 ]' 00:13:39.299 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.558 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:39.558 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.558 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:39.558 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.558 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.558 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.558 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.817 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:13:39.817 22:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:13:40.386 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.386 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:40.386 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.386 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.386 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.386 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:40.386 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:40.386 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:40.386 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:40.386 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:40.386 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:40.645 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:40.645 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:40.645 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:40.645 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:40.645 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:40.645 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.645 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.645 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.645 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.645 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.645 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.645 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.645 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.212 00:13:41.471 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.471 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.471 22:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.730 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.730 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.730 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.730 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.730 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.730 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.730 { 00:13:41.730 "cntlid": 145, 00:13:41.730 "qid": 0, 00:13:41.730 "state": "enabled", 00:13:41.730 "thread": "nvmf_tgt_poll_group_000", 00:13:41.730 
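
After the fixed-parameter rounds, the host is reopened to the full matrix of digests and DH groups before authenticating again with key0 (target/auth.sh@129-141 above):

    # Re-enable every supported digest and DH group on the host side.
    hostrpc bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
    connect_authenticate sha512 ffdhe8192 0
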
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:41.730 "listen_address": { 00:13:41.730 "trtype": "TCP", 00:13:41.730 "adrfam": "IPv4", 00:13:41.730 "traddr": "10.0.0.3", 00:13:41.730 "trsvcid": "4420" 00:13:41.730 }, 00:13:41.730 "peer_address": { 00:13:41.730 "trtype": "TCP", 00:13:41.730 "adrfam": "IPv4", 00:13:41.730 "traddr": "10.0.0.1", 00:13:41.730 "trsvcid": "41530" 00:13:41.730 }, 00:13:41.730 "auth": { 00:13:41.730 "state": "completed", 00:13:41.730 "digest": "sha512", 00:13:41.730 "dhgroup": "ffdhe8192" 00:13:41.730 } 00:13:41.730 } 00:13:41.730 ]' 00:13:41.730 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.730 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:41.730 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.730 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:41.730 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.730 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.730 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.730 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.989 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:13:41.989 22:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:00:YmMwNmY5ZjAzYTBiOWM4ZTA2ODc5YjViZjg4NTgzZjhkMzBjMjQxOGViM2RlN2ZiACR8Hg==: --dhchap-ctrl-secret DHHC-1:03:ZWI1ZWEwNGJlYWJmZWFkNmQxYmEzNjk2NzgyNWYwY2ZhMWNjMDU0N2JjN2JlZjU4YTg5ZDRlNWU5ZGUyZTFhMUXNAGQ=: 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 00:13:42.925 22:44:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:42.925 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:43.494 request: 00:13:43.494 { 00:13:43.494 "name": "nvme0", 00:13:43.494 "trtype": "tcp", 00:13:43.494 "traddr": "10.0.0.3", 00:13:43.494 "adrfam": "ipv4", 00:13:43.494 "trsvcid": "4420", 00:13:43.494 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:43.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:43.494 "prchk_reftag": false, 00:13:43.494 "prchk_guard": false, 00:13:43.494 "hdgst": false, 00:13:43.494 "ddgst": false, 00:13:43.494 "dhchap_key": "key2", 00:13:43.494 "allow_unrecognized_csi": false, 00:13:43.494 "method": "bdev_nvme_attach_controller", 00:13:43.494 "req_id": 1 00:13:43.494 } 00:13:43.494 Got JSON-RPC error response 00:13:43.494 response: 00:13:43.494 { 00:13:43.494 "code": -5, 00:13:43.494 "message": "Input/output error" 00:13:43.494 } 00:13:43.494 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:43.494 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:43.494 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:43.494 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:43.494 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:43.494 
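
This is the first of the negative cases: the subsystem is provisioned with key1 only, so an attach with key2 has to fail, and the failed DH-HMAC-CHAP handshake surfaces as JSON-RPC error -5 (Input/output error), as in the response above. The NOT wrapper from autotest_common.sh inverts the exit status, so the test passes exactly when the attach fails:

    # Target authorizes the host with key1 only (target/auth.sh@144) ...
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1
    # ... so attaching with key2 must be rejected.
    NOT bdev_connect -b nvme0 --dhchap-key key2

The two cases that follow repeat the pattern with a mismatched controller key (key1/ckey2 against a target holding key1/ckey1) and with a controller key the target was never given (key1/ckey1 against a target holding key1 alone).
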
22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.494 22:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.494 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.494 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.494 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.494 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.494 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.494 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:43.494 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:43.494 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:43.494 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:43.494 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.494 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:43.494 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:43.494 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:43.494 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:43.494 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:44.091 request: 00:13:44.091 { 00:13:44.091 "name": "nvme0", 00:13:44.092 "trtype": "tcp", 00:13:44.092 "traddr": "10.0.0.3", 00:13:44.092 "adrfam": "ipv4", 00:13:44.092 "trsvcid": "4420", 00:13:44.092 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:44.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:44.092 "prchk_reftag": false, 00:13:44.092 "prchk_guard": false, 00:13:44.092 "hdgst": false, 00:13:44.092 "ddgst": false, 00:13:44.092 "dhchap_key": "key1", 00:13:44.092 "dhchap_ctrlr_key": "ckey2", 00:13:44.092 "allow_unrecognized_csi": false, 00:13:44.092 "method": "bdev_nvme_attach_controller", 00:13:44.092 "req_id": 1 00:13:44.092 } 00:13:44.092 Got JSON-RPC error response 00:13:44.092 response: 00:13:44.092 { 
00:13:44.092 "code": -5, 00:13:44.092 "message": "Input/output error" 00:13:44.092 } 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:44.092 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:44.659 
request: 00:13:44.659 { 00:13:44.659 "name": "nvme0", 00:13:44.659 "trtype": "tcp", 00:13:44.659 "traddr": "10.0.0.3", 00:13:44.659 "adrfam": "ipv4", 00:13:44.659 "trsvcid": "4420", 00:13:44.659 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:44.659 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:44.659 "prchk_reftag": false, 00:13:44.659 "prchk_guard": false, 00:13:44.659 "hdgst": false, 00:13:44.659 "ddgst": false, 00:13:44.659 "dhchap_key": "key1", 00:13:44.659 "dhchap_ctrlr_key": "ckey1", 00:13:44.660 "allow_unrecognized_csi": false, 00:13:44.660 "method": "bdev_nvme_attach_controller", 00:13:44.660 "req_id": 1 00:13:44.660 } 00:13:44.660 Got JSON-RPC error response 00:13:44.660 response: 00:13:44.660 { 00:13:44.660 "code": -5, 00:13:44.660 "message": "Input/output error" 00:13:44.660 } 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 79110 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 79110 ']' 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 79110 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79110 00:13:44.660 killing process with pid 79110 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79110' 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 79110 00:13:44.660 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 79110 00:13:44.919 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:44.919 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:44.919 22:44:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:44.919 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.919 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=82152 00:13:44.919 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:44.919 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 82152 00:13:44.919 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 82152 ']' 00:13:44.919 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.919 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:44.919 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.919 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:44.919 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.177 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:45.177 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:45.177 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:45.177 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:45.178 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.178 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.178 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:45.178 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 82152 00:13:45.178 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 82152 ']' 00:13:45.178 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.178 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:45.178 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
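
For the second half of the suite the target is restarted with RPC gating and authentication debug logging enabled (nvmfappstart --wait-for-rpc -L nvmf_auth above). The essential invocation, exactly as traced:

    # Start nvmf_tgt in the test netns, gated on RPC init, with auth logging on.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    waitforlisten "$nvmfpid"
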
00:13:45.178 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:45.178 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.436 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:45.436 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:45.436 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:45.436 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.436 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.436 null0 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1N3 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.bfY ]] 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bfY 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.6ZD 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.EpG ]] 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EpG 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:45.695 22:45:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.a0v 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.f1q ]] 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.f1q 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.LBw 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
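
With the target back up, the key files generated at the start of the run are registered through the keyring before any subsystem references them by name (target/auth.sh@174-176 above). The full sequence for this run, with key3 having no controller-key counterpart:

    # Register each generated key (and its ckey, where one exists) by name.
    rpc_cmd keyring_file_add_key key0  /tmp/spdk.key-null.1N3
    rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bfY
    rpc_cmd keyring_file_add_key key1  /tmp/spdk.key-sha256.6ZD
    rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.EpG
    rpc_cmd keyring_file_add_key key2  /tmp/spdk.key-sha384.a0v
    rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.f1q
    rpc_cmd keyring_file_add_key key3  /tmp/spdk.key-sha512.LBw
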
00:13:45.695 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:46.631 nvme0n1 00:13:46.631 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.631 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.631 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.890 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.890 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.890 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.890 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.890 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.890 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.890 { 00:13:46.890 "cntlid": 1, 00:13:46.890 "qid": 0, 00:13:46.890 "state": "enabled", 00:13:46.890 "thread": "nvmf_tgt_poll_group_000", 00:13:46.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:46.890 "listen_address": { 00:13:46.890 "trtype": "TCP", 00:13:46.890 "adrfam": "IPv4", 00:13:46.890 "traddr": "10.0.0.3", 00:13:46.890 "trsvcid": "4420" 00:13:46.890 }, 00:13:46.890 "peer_address": { 00:13:46.890 "trtype": "TCP", 00:13:46.890 "adrfam": "IPv4", 00:13:46.890 "traddr": "10.0.0.1", 00:13:46.890 "trsvcid": "55930" 00:13:46.890 }, 00:13:46.890 "auth": { 00:13:46.890 "state": "completed", 00:13:46.890 "digest": "sha512", 00:13:46.890 "dhgroup": "ffdhe8192" 00:13:46.890 } 00:13:46.890 } 00:13:46.890 ]' 00:13:46.890 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.890 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:46.890 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.152 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:47.152 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.152 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.152 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.152 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.411 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:13:47.411 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:13:48.348 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.348 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:48.348 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.348 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.348 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.348 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key3 00:13:48.348 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.348 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.348 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.348 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:48.348 22:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:48.607 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:48.607 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:48.607 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:48.607 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:48.607 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.607 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:48.607 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:48.607 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:48.607 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:48.607 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:48.866 request: 00:13:48.866 { 00:13:48.866 "name": "nvme0", 00:13:48.866 "trtype": "tcp", 00:13:48.866 "traddr": "10.0.0.3", 00:13:48.866 "adrfam": "ipv4", 00:13:48.866 "trsvcid": "4420", 00:13:48.866 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:48.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:48.866 "prchk_reftag": false, 00:13:48.866 "prchk_guard": false, 00:13:48.866 "hdgst": false, 00:13:48.866 "ddgst": false, 00:13:48.866 "dhchap_key": "key3", 00:13:48.866 "allow_unrecognized_csi": false, 00:13:48.866 "method": "bdev_nvme_attach_controller", 00:13:48.866 "req_id": 1 00:13:48.866 } 00:13:48.866 Got JSON-RPC error response 00:13:48.866 response: 00:13:48.866 { 00:13:48.866 "code": -5, 00:13:48.866 "message": "Input/output error" 00:13:48.866 } 00:13:48.866 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:48.866 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:48.866 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:48.866 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:48.866 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:48.866 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:48.866 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:48.866 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:49.124 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:49.124 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:49.124 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:49.124 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:49.124 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:49.124 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:49.124 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:49.124 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:49.124 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:49.124 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:49.382 request: 00:13:49.382 { 00:13:49.382 "name": "nvme0", 00:13:49.382 "trtype": "tcp", 00:13:49.382 "traddr": "10.0.0.3", 00:13:49.382 "adrfam": "ipv4", 00:13:49.382 "trsvcid": "4420", 00:13:49.382 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:49.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:49.382 "prchk_reftag": false, 00:13:49.382 "prchk_guard": false, 00:13:49.382 "hdgst": false, 00:13:49.382 "ddgst": false, 00:13:49.382 "dhchap_key": "key3", 00:13:49.382 "allow_unrecognized_csi": false, 00:13:49.382 "method": "bdev_nvme_attach_controller", 00:13:49.382 "req_id": 1 00:13:49.382 } 00:13:49.382 Got JSON-RPC error response 00:13:49.382 response: 00:13:49.382 { 00:13:49.382 "code": -5, 00:13:49.382 "message": "Input/output error" 00:13:49.382 } 00:13:49.382 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:49.382 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:49.382 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:49.382 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:49.382 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:49.382 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:49.382 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:49.382 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:49.382 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:49.382 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:49.949 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:50.208 request: 00:13:50.208 { 00:13:50.208 "name": "nvme0", 00:13:50.208 "trtype": "tcp", 00:13:50.208 "traddr": "10.0.0.3", 00:13:50.208 "adrfam": "ipv4", 00:13:50.208 "trsvcid": "4420", 00:13:50.208 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:50.208 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:50.208 "prchk_reftag": false, 00:13:50.208 "prchk_guard": false, 00:13:50.208 "hdgst": false, 00:13:50.208 "ddgst": false, 00:13:50.208 "dhchap_key": "key0", 00:13:50.208 "dhchap_ctrlr_key": "key1", 00:13:50.208 "allow_unrecognized_csi": false, 00:13:50.208 "method": "bdev_nvme_attach_controller", 00:13:50.208 "req_id": 1 00:13:50.208 } 00:13:50.208 Got JSON-RPC error response 00:13:50.208 response: 00:13:50.208 { 00:13:50.208 "code": -5, 00:13:50.208 "message": "Input/output error" 00:13:50.208 } 00:13:50.208 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:50.208 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:50.208 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:50.208 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:13:50.208 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:50.208 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:50.208 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:50.776 nvme0n1 00:13:50.776 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:50.776 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.776 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:51.033 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.033 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.033 22:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.599 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 00:13:51.599 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.599 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.599 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.599 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:51.599 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:51.599 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:52.535 nvme0n1 00:13:52.535 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:52.535 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:52.535 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.794 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:52.794 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:52.794 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.794 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.794 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.794 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:52.794 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:52.794 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:53.361 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.361 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:13:53.361 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid 172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -l 0 --dhchap-secret DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: --dhchap-ctrl-secret DHHC-1:03:MWFhYzY1MTczYzI4MzM1NzMyZjczNzY3YWZmYTQwMWI0ZDVmYjI2YTQxN2ViNmU4YjI3ZTFiZmRkZTFjNTJkZllQ/PE=: 00:13:53.927 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:53.927 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:53.927 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:53.927 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:53.927 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:53.927 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:53.927 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:53.927 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.927 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.493 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:54.493 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:54.493 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:54.493 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:54.493 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.493 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:54.493 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:54.493 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:54.493 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:54.493 22:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:55.062 request: 00:13:55.062 { 00:13:55.062 "name": "nvme0", 00:13:55.062 "trtype": "tcp", 00:13:55.062 "traddr": "10.0.0.3", 00:13:55.062 "adrfam": "ipv4", 00:13:55.062 "trsvcid": "4420", 00:13:55.062 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:55.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3", 00:13:55.062 "prchk_reftag": false, 00:13:55.062 "prchk_guard": false, 00:13:55.062 "hdgst": false, 00:13:55.062 "ddgst": false, 00:13:55.062 "dhchap_key": "key1", 00:13:55.062 "allow_unrecognized_csi": false, 00:13:55.062 "method": "bdev_nvme_attach_controller", 00:13:55.062 "req_id": 1 00:13:55.062 } 00:13:55.062 Got JSON-RPC error response 00:13:55.062 response: 00:13:55.062 { 00:13:55.062 "code": -5, 00:13:55.062 "message": "Input/output error" 00:13:55.062 } 00:13:55.062 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:55.062 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:55.062 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:55.062 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:55.062 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:55.062 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:55.062 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:55.999 nvme0n1 00:13:55.999 
22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:55.999 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:56.000 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.259 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.259 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.259 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.519 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:13:56.519 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.519 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.519 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.519 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:56.519 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:56.519 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:56.778 nvme0n1 00:13:56.778 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:56.778 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.778 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:57.347 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.347 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.347 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.347 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:57.347 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.347 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.347 22:45:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.347 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: '' 2s 00:13:57.347 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:57.347 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:57.347 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: 00:13:57.347 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:57.347 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:57.347 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:57.347 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: ]] 00:13:57.347 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MjdkMzBmNjVjZjliZDA3MDA0ZjNmNDQ4OGJhNDQxM2XV124Y: 00:13:57.347 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:57.347 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:57.347 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key1 --dhchap-ctrlr-key key2 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: 2s 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:59.919 22:45:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: ]] 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NDQ0MGE3NmNhYTc4YTE0NTZmNGI0NWQ0N2U3YzI1MjVlNDA5ZDM0MDBkMzYxNzY2nWyuMg==: 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:59.919 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:01.828 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:14:01.828 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:14:01.828 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:14:01.828 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:14:01.828 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:14:01.828 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:14:01.828 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:14:01.828 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.828 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:01.828 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.828 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.828 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.828 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:01.828 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:01.828 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:02.763 nvme0n1 00:14:02.763 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:02.763 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.763 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.763 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.763 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:02.763 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:03.331 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:03.331 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.331 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:03.898 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.898 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:14:03.898 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.898 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.898 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.898 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:03.898 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:03.898 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:03.898 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.898 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:04.465 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.465 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:04.465 22:45:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.465 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.465 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.465 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:04.465 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:04.465 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:04.465 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:04.465 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.465 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:04.465 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:04.465 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:04.465 22:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:05.032 request: 00:14:05.032 { 00:14:05.032 "name": "nvme0", 00:14:05.032 "dhchap_key": "key1", 00:14:05.032 "dhchap_ctrlr_key": "key3", 00:14:05.032 "method": "bdev_nvme_set_keys", 00:14:05.032 "req_id": 1 00:14:05.032 } 00:14:05.032 Got JSON-RPC error response 00:14:05.032 response: 00:14:05.032 { 00:14:05.032 "code": -13, 00:14:05.032 "message": "Permission denied" 00:14:05.032 } 00:14:05.032 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:05.032 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:05.032 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:05.032 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:05.032 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:05.032 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.032 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:05.290 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:05.290 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:06.222 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:06.222 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:06.222 22:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.480 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:06.480 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:06.480 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.480 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.480 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.480 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:06.480 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:06.480 22:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:07.416 nvme0n1 00:14:07.416 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:07.416 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.416 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.416 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.416 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:07.416 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:07.416 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:07.416 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:07.416 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.416 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:07.416 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:07.416 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:07.416 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:07.983 request: 00:14:07.983 { 00:14:07.983 "name": "nvme0", 00:14:07.983 "dhchap_key": "key2", 00:14:07.983 "dhchap_ctrlr_key": "key0", 00:14:07.983 "method": "bdev_nvme_set_keys", 00:14:07.983 "req_id": 1 00:14:07.983 } 00:14:07.983 Got JSON-RPC error response 00:14:07.983 response: 00:14:07.983 { 00:14:07.983 "code": -13, 00:14:07.983 "message": "Permission denied" 00:14:07.983 } 00:14:08.242 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:08.242 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:08.242 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:08.242 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:08.242 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:08.242 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:08.242 22:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.501 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:08.501 22:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:09.437 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:09.437 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:09.437 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:09.695 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:09.695 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:09.695 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:09.695 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 79129 00:14:09.696 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 79129 ']' 00:14:09.696 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 79129 00:14:09.696 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:09.696 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:09.696 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79129 00:14:09.696 killing process with pid 79129 00:14:09.696 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:09.696 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:09.696 22:45:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79129' 00:14:09.696 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 79129 00:14:09.696 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 79129 00:14:09.954 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:09.954 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:09.954 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:10.212 rmmod nvme_tcp 00:14:10.212 rmmod nvme_fabrics 00:14:10.212 rmmod nvme_keyring 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 82152 ']' 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 82152 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 82152 ']' 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 82152 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82152 00:14:10.212 killing process with pid 82152 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82152' 00:14:10.212 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 82152 00:14:10.213 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 82152 00:14:10.213 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:10.213 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:10.213 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:10.213 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:10.213 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 
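
Note: the run above has exercised SPDK's DH-HMAC-CHAP authentication end to end: attaching a host controller with --dhchap-key/--dhchap-ctrlr-key, confirming the negotiated digest and DH group (sha512, ffdhe8192) via nvmf_subsystem_get_qpairs, provoking -5 (Input/output error) attach failures on digest/dhgroup or key mismatches, rotating keys on both sides with nvmf_subsystem_set_keys and bdev_nvme_set_keys, and provoking -13 (Permission denied) when the host re-keys to keys the subsystem does not hold. A minimal host-side sketch of that flow, assuming a target already listening on 10.0.0.3:4420 with the same NQNs and named keys (key0..key3) this test registered earlier; every path, socket, flag, and identifier below is taken verbatim from the log above, not from a reference:

# Let the host negotiate any digest/DH group (the values used in the log).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192

# Attach, authenticating with key3; adding --dhchap-ctrlr-key makes the
# authentication bidirectional. Fails with -5 if digests, dhgroups, or keys
# do not match what the subsystem allows for this host.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

# Rotate: stage the new keys on the subsystem first (target-side RPC, default
# socket here), then re-key the live controller on the host. Re-keying to keys
# the subsystem does not hold fails with -13 (Permission denied), as above.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 \
    --dhchap-key key2 --dhchap-ctrlr-key key3
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3
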
00:14:10.213 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 00:14:10.213 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:14:10.213 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:10.213 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:10.213 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:10.213 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:10.471 22:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.1N3 /tmp/spdk.key-sha256.6ZD /tmp/spdk.key-sha384.a0v /tmp/spdk.key-sha512.LBw /tmp/spdk.key-sha512.bfY /tmp/spdk.key-sha384.EpG /tmp/spdk.key-sha256.f1q '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:10.471 ************************************ 00:14:10.471 END TEST nvmf_auth_target 00:14:10.471 ************************************ 00:14:10.471 00:14:10.471 real 3m8.066s 00:14:10.471 user 7m31.841s 00:14:10.471 sys 0m28.386s 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:10.471 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:10.731 ************************************ 00:14:10.731 START TEST nvmf_bdevio_no_huge 00:14:10.731 ************************************ 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:10.731 * Looking for test storage... 00:14:10.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:10.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.731 --rc genhtml_branch_coverage=1 00:14:10.731 --rc genhtml_function_coverage=1 00:14:10.731 --rc genhtml_legend=1 00:14:10.731 --rc geninfo_all_blocks=1 00:14:10.731 --rc geninfo_unexecuted_blocks=1 00:14:10.731 00:14:10.731 ' 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:10.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.731 --rc genhtml_branch_coverage=1 00:14:10.731 --rc genhtml_function_coverage=1 00:14:10.731 --rc genhtml_legend=1 00:14:10.731 --rc geninfo_all_blocks=1 00:14:10.731 --rc geninfo_unexecuted_blocks=1 00:14:10.731 00:14:10.731 ' 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:10.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.731 --rc genhtml_branch_coverage=1 00:14:10.731 --rc genhtml_function_coverage=1 00:14:10.731 --rc genhtml_legend=1 00:14:10.731 --rc geninfo_all_blocks=1 00:14:10.731 --rc geninfo_unexecuted_blocks=1 00:14:10.731 00:14:10.731 ' 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:10.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.731 --rc genhtml_branch_coverage=1 00:14:10.731 --rc genhtml_function_coverage=1 00:14:10.731 --rc genhtml_legend=1 00:14:10.731 --rc geninfo_all_blocks=1 00:14:10.731 --rc geninfo_unexecuted_blocks=1 00:14:10.731 00:14:10.731 ' 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:10.731 
22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.731 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:10.732 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@456 -- # nvmf_veth_init 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:10.732 
22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:10.732 Cannot find device "nvmf_init_br" 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:10.732 Cannot find device "nvmf_init_br2" 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:10.732 Cannot find device "nvmf_tgt_br" 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:10.732 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:10.990 Cannot find device "nvmf_tgt_br2" 00:14:10.990 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:10.990 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:10.990 Cannot find device "nvmf_init_br" 00:14:10.990 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:10.990 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:10.990 Cannot find device "nvmf_init_br2" 00:14:10.990 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:10.990 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:10.990 Cannot find device "nvmf_tgt_br" 00:14:10.990 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:10.991 Cannot find device "nvmf_tgt_br2" 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:10.991 Cannot find device "nvmf_br" 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:10.991 Cannot find device "nvmf_init_if" 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:10.991 Cannot find device "nvmf_init_if2" 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:10.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:10.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:10.991 22:45:25 
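The "Cannot find device" and "Cannot open network namespace" complaints above are expected: nvmf_veth_init first tears down any leftovers from a previous run, and each probe is paired with a true (the repeated '-- # true' entries at the same source lines) so the script's error handling survives a clean slate. It then rebuilds the test fixture. A sketch of the first initiator/target pair, with names and addresses exactly as in the trace (a second pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is created the same way; enslaving everything to nvmf_br, the SPDK_NVMF-tagged iptables ACCEPT rules, and the four ping probes follow below):

    ip netns add nvmf_tgt_ns_spdk                                 # target gets its own net namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                # move the target end inside
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge                               # the *_br peers are enslaved to this next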
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:10.991 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:11.250 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:11.250 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:14:11.250 00:14:11.250 --- 10.0.0.3 ping statistics --- 00:14:11.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.250 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:11.250 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:11.250 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:14:11.250 00:14:11.250 --- 10.0.0.4 ping statistics --- 00:14:11.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.250 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:11.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:11.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:14:11.250 00:14:11.250 --- 10.0.0.1 ping statistics --- 00:14:11.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.250 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:11.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:11.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:14:11.250 00:14:11.250 --- 10.0.0.2 ping statistics --- 00:14:11.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.250 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # return 0 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=82796 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 82796 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 82796 ']' 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:11.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:11.250 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:11.250 [2024-12-07 22:45:25.916427] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:11.250 [2024-12-07 22:45:25.916547] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:11.509 [2024-12-07 22:45:26.060858] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:11.509 [2024-12-07 22:45:26.165858] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.509 [2024-12-07 22:45:26.165973] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.509 [2024-12-07 22:45:26.166006] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.509 [2024-12-07 22:45:26.166027] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.509 [2024-12-07 22:45:26.166043] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.509 [2024-12-07 22:45:26.166149] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:14:11.509 [2024-12-07 22:45:26.166811] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:14:11.509 [2024-12-07 22:45:26.166946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.509 [2024-12-07 22:45:26.166938] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:14:11.509 [2024-12-07 22:45:26.173371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:12.453 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:12.453 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:14:12.453 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:12.453 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:12.453 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:12.454 [2024-12-07 22:45:27.019166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:12.454 Malloc0 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.454 22:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:12.454 [2024-12-07 22:45:27.057758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:14:12.454 { 00:14:12.454 "params": { 00:14:12.454 "name": "Nvme$subsystem", 00:14:12.454 "trtype": "$TEST_TRANSPORT", 00:14:12.454 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:12.454 "adrfam": "ipv4", 00:14:12.454 "trsvcid": "$NVMF_PORT", 00:14:12.454 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:12.454 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:12.454 "hdgst": ${hdgst:-false}, 00:14:12.454 "ddgst": ${ddgst:-false} 00:14:12.454 }, 00:14:12.454 "method": "bdev_nvme_attach_controller" 00:14:12.454 } 00:14:12.454 EOF 00:14:12.454 )") 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
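By this point the target, started inside the namespace with --no-huge -s 1024 (a 1024 MB memory pool backed by ordinary pages instead of hugepages, which is what this test exists to exercise), has been provisioned over its RPC socket and is listening on 10.0.0.3:4420. rpc_cmd in the trace is a wrapper around the target's RPC server; an equivalent sequence by hand through scripts/rpc.py (the same script tls.sh binds to rpc_py later in this log), with flags exactly as bdevio.sh passes them, would be:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

gen_nvmf_target_json, whose cat/jq/printf steps surround this point in the trace, renders the matching initiator-side configuration that the bdevio binary (itself run with --no-huge -s 1024) reads from /dev/fd/62: a single bdev_nvme_attach_controller call aimed at that listener.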
00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:14:12.454 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:14:12.454 "params": { 00:14:12.454 "name": "Nvme1", 00:14:12.454 "trtype": "tcp", 00:14:12.454 "traddr": "10.0.0.3", 00:14:12.454 "adrfam": "ipv4", 00:14:12.454 "trsvcid": "4420", 00:14:12.454 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:12.454 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:12.454 "hdgst": false, 00:14:12.454 "ddgst": false 00:14:12.454 }, 00:14:12.454 "method": "bdev_nvme_attach_controller" 00:14:12.454 }' 00:14:12.454 [2024-12-07 22:45:27.117591] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:12.454 [2024-12-07 22:45:27.117687] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82832 ] 00:14:12.722 [2024-12-07 22:45:27.258532] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:12.722 [2024-12-07 22:45:27.365234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.722 [2024-12-07 22:45:27.367905] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.722 [2024-12-07 22:45:27.367949] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.722 [2024-12-07 22:45:27.382135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:12.981 I/O targets: 00:14:12.981 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:12.981 00:14:12.981 00:14:12.981 CUnit - A unit testing framework for C - Version 2.1-3 00:14:12.981 http://cunit.sourceforge.net/ 00:14:12.981 00:14:12.982 00:14:12.982 Suite: bdevio tests on: Nvme1n1 00:14:12.982 Test: blockdev write read block ...passed 00:14:12.982 Test: blockdev write zeroes read block ...passed 00:14:12.982 Test: blockdev write zeroes read no split ...passed 00:14:12.982 Test: blockdev write zeroes read split ...passed 00:14:12.982 Test: blockdev write zeroes read split partial ...passed 00:14:12.982 Test: blockdev reset ...[2024-12-07 22:45:27.601375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:12.982 [2024-12-07 22:45:27.601485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x203b2d0 (9): Bad file descriptor 00:14:12.982 [2024-12-07 22:45:27.620191] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:12.982 passed 00:14:12.982 Test: blockdev write read 8 blocks ...passed 00:14:12.982 Test: blockdev write read size > 128k ...passed 00:14:12.982 Test: blockdev write read invalid size ...passed 00:14:12.982 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:12.982 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:12.982 Test: blockdev write read max offset ...passed 00:14:12.982 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:12.982 Test: blockdev writev readv 8 blocks ...passed 00:14:12.982 Test: blockdev writev readv 30 x 1block ...passed 00:14:12.982 Test: blockdev writev readv block ...passed 00:14:12.982 Test: blockdev writev readv size > 128k ...passed 00:14:12.982 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:12.982 Test: blockdev comparev and writev ...[2024-12-07 22:45:27.628023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.982 [2024-12-07 22:45:27.628073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:12.982 [2024-12-07 22:45:27.628095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.982 [2024-12-07 22:45:27.628106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:12.982 [2024-12-07 22:45:27.628611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.982 [2024-12-07 22:45:27.628641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:12.982 [2024-12-07 22:45:27.628659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.982 [2024-12-07 22:45:27.628670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:12.982 [2024-12-07 22:45:27.629000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.982 [2024-12-07 22:45:27.629030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:12.982 [2024-12-07 22:45:27.629048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.982 [2024-12-07 22:45:27.629059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:12.982 [2024-12-07 22:45:27.629469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.982 [2024-12-07 22:45:27.629499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:12.982 [2024-12-07 22:45:27.629518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:12.982 [2024-12-07 22:45:27.629528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:12.982 passed 00:14:12.982 Test: blockdev nvme passthru rw ...passed 00:14:12.982 Test: blockdev nvme passthru vendor specific ...[2024-12-07 22:45:27.630338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:12.982 [2024-12-07 22:45:27.630362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:12.982 [2024-12-07 22:45:27.630467] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:12.982 [2024-12-07 22:45:27.630489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:12.982 [2024-12-07 22:45:27.630591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:12.982 [2024-12-07 22:45:27.630612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:12.982 [2024-12-07 22:45:27.630723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:12.982 [2024-12-07 22:45:27.630752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:12.982 passed 00:14:12.982 Test: blockdev nvme admin passthru ...passed 00:14:12.982 Test: blockdev copy ...passed 00:14:12.982 00:14:12.982 Run Summary: Type Total Ran Passed Failed Inactive 00:14:12.982 suites 1 1 n/a 0 0 00:14:12.982 tests 23 23 23 0 0 00:14:12.982 asserts 152 152 152 0 n/a 00:14:12.982 00:14:12.982 Elapsed time = 0.164 seconds 00:14:13.241 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:13.241 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.241 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.241 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.241 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:13.241 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:13.241 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:13.241 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:13.500 rmmod nvme_tcp 00:14:13.500 rmmod nvme_fabrics 00:14:13.500 rmmod nvme_keyring 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 82796 ']' 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 82796 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 82796 ']' 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 82796 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82796 00:14:13.500 killing process with pid 82796 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82796' 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 82796 00:14:13.500 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 82796 00:14:13.759 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:13.759 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:13.759 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:13.759 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:13.759 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:14:13.759 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:14:13.759 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:13.759 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:13.759 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:13.759 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:13.759 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:13.759 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:13.759 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:13.759 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:14.017 22:45:28 
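Teardown mirrors setup. killprocess above inspects the process name of pid 82796 (reactor_3) before sending the kill and waiting for it, and nvmf_tcp_fini then strips the firewall additions. Because every rule was installed through the ipts wrapper with an identifying '-m comment --comment SPDK_NVMF:...' tag, cleanup is a single filter-and-reload, as the iptr trace above shows:

    # drop every rule this run added, leave everything else untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The veth pairs, bridge, and namespace are then removed in roughly the reverse of their creation order, a teardown that continues just below.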
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:14.017 00:14:14.017 real 0m3.449s 00:14:14.017 user 0m10.415s 00:14:14.017 sys 0m1.342s 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:14.017 ************************************ 00:14:14.017 END TEST nvmf_bdevio_no_huge 00:14:14.017 ************************************ 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:14.017 ************************************ 00:14:14.017 START TEST nvmf_tls 00:14:14.017 ************************************ 00:14:14.017 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:14.275 * Looking for test storage... 
00:14:14.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:14.275 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:14.275 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:14:14.275 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:14.275 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:14.275 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:14.275 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:14.275 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:14.275 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:14.275 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:14.275 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:14.275 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:14.275 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:14.275 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:14.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.276 --rc genhtml_branch_coverage=1 00:14:14.276 --rc genhtml_function_coverage=1 00:14:14.276 --rc genhtml_legend=1 00:14:14.276 --rc geninfo_all_blocks=1 00:14:14.276 --rc geninfo_unexecuted_blocks=1 00:14:14.276 00:14:14.276 ' 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:14.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.276 --rc genhtml_branch_coverage=1 00:14:14.276 --rc genhtml_function_coverage=1 00:14:14.276 --rc genhtml_legend=1 00:14:14.276 --rc geninfo_all_blocks=1 00:14:14.276 --rc geninfo_unexecuted_blocks=1 00:14:14.276 00:14:14.276 ' 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:14.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.276 --rc genhtml_branch_coverage=1 00:14:14.276 --rc genhtml_function_coverage=1 00:14:14.276 --rc genhtml_legend=1 00:14:14.276 --rc geninfo_all_blocks=1 00:14:14.276 --rc geninfo_unexecuted_blocks=1 00:14:14.276 00:14:14.276 ' 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:14.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:14.276 --rc genhtml_branch_coverage=1 00:14:14.276 --rc genhtml_function_coverage=1 00:14:14.276 --rc genhtml_legend=1 00:14:14.276 --rc geninfo_all_blocks=1 00:14:14.276 --rc geninfo_unexecuted_blocks=1 00:14:14.276 00:14:14.276 ' 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:14.276 22:45:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:14.276 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:14.276 
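The tls suite opens with the same boilerplate the bdevio test used: source test/nvmf/common.sh, point rpc_py at scripts/rpc.py, and call nvmftestinit, which re-runs the whole veth/namespace/bridge setup traced earlier. Every target test in this log follows the same skeleton; a sketch with helper names taken from the trace (the trap is visible just below):

    source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
    nvmftestinit                              # tear down leftovers, rebuild the veth/netns/bridge fixture
    trap nvmftestfini SIGINT SIGTERM EXIT     # guarantee cleanup on any exit path
    # ... test body: start nvmf_tgt, provision it over RPC, drive the workload ...
    trap - SIGINT SIGTERM EXIT                # on success, clear the trap ...
    nvmftestfini                              # ... and tear down explicitly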
22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@456 -- # nvmf_veth_init 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:14.276 Cannot find device "nvmf_init_br" 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:14.276 Cannot find device "nvmf_init_br2" 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:14.276 Cannot find device "nvmf_tgt_br" 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:14.276 Cannot find device "nvmf_tgt_br2" 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:14.276 Cannot find device "nvmf_init_br" 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:14.276 Cannot find device "nvmf_init_br2" 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:14.276 Cannot find device "nvmf_tgt_br" 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:14.276 22:45:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:14.276 Cannot find device "nvmf_tgt_br2" 00:14:14.276 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:14.276 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:14.276 Cannot find device "nvmf_br" 00:14:14.276 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:14.276 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:14.276 Cannot find device "nvmf_init_if" 00:14:14.276 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:14.276 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:14.276 Cannot find device "nvmf_init_if2" 00:14:14.276 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:14.276 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:14.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:14.276 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:14.276 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:14.276 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:14.276 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:14.276 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:14.533 22:45:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:14.533 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:14.790 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:14.790 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:14.790 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:14:14.790 00:14:14.790 --- 10.0.0.3 ping statistics --- 00:14:14.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.790 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:14:14.790 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:14.790 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:14.790 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:14:14.790 00:14:14.790 --- 10.0.0.4 ping statistics --- 00:14:14.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.790 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:14.790 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:14.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:14:14.790 00:14:14.790 --- 10.0.0.1 ping statistics --- 00:14:14.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.790 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:14:14.790 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:14.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:14.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:14:14.791 00:14:14.791 --- 10.0.0.2 ping statistics --- 00:14:14.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.791 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # return 0 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83065 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83065 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83065 ']' 00:14:14.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:14.791 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:14.791 [2024-12-07 22:45:29.410938] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:14.791 [2024-12-07 22:45:29.411071] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.791 [2024-12-07 22:45:29.552654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.048 [2024-12-07 22:45:29.595558] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.048 [2024-12-07 22:45:29.595621] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.048 [2024-12-07 22:45:29.595636] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.048 [2024-12-07 22:45:29.595645] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.048 [2024-12-07 22:45:29.595654] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.048 [2024-12-07 22:45:29.595687] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:15.048 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:15.048 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:15.048 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:15.048 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:15.048 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.048 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.048 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:15.048 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:15.305 true 00:14:15.563 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:15.563 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:15.820 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:15.820 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:15.820 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:16.077 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:16.077 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:16.642 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:16.642 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:16.642 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:16.899 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:16.899 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:17.158 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:17.158 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:17.158 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:17.158 22:45:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:17.416 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:17.416 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:17.416 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:17.676 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:17.676 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:18.245 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:18.245 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:18.245 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:18.245 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:18.245 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:18.504 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:18.504 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:18.504 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:18.504 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:18.504 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:18.504 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:14:18.504 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:14:18.504 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:14:18.504 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.PcNjpxd77T 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.qqOLPJao6s 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.PcNjpxd77T 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.qqOLPJao6s 00:14:18.763 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:19.022 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:19.281 [2024-12-07 22:45:33.862807] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:19.281 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.PcNjpxd77T 00:14:19.281 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.PcNjpxd77T 00:14:19.282 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:19.541 [2024-12-07 22:45:34.126311] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.541 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:19.800 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:20.059 [2024-12-07 22:45:34.598415] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:20.059 [2024-12-07 22:45:34.598870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:20.059 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:20.319 malloc0 00:14:20.319 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:20.577 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.PcNjpxd77T 00:14:20.577 22:45:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:21.144 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.PcNjpxd77T 00:14:31.128 Initializing NVMe Controllers 00:14:31.128 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:31.128 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:31.128 Initialization complete. Launching workers. 00:14:31.128 ======================================================== 00:14:31.128 Latency(us) 00:14:31.128 Device Information : IOPS MiB/s Average min max 00:14:31.128 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10257.37 40.07 6240.84 1497.02 16375.60 00:14:31.128 ======================================================== 00:14:31.128 Total : 10257.37 40.07 6240.84 1497.02 16375.60 00:14:31.128 00:14:31.128 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PcNjpxd77T 00:14:31.128 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:31.128 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:31.128 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:31.128 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PcNjpxd77T 00:14:31.128 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:31.128 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83302 00:14:31.128 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:31.128 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83302 /var/tmp/bdevperf.sock 00:14:31.128 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:31.128 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83302 ']' 00:14:31.128 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:31.128 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:31.128 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:31.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
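A note on the key material and target bring-up traced above. The two interchange PSKs written to /tmp/tmp.PcNjpxd77T and /tmp/tmp.qqOLPJao6s follow the NVMe TLS PSK interchange format: the literal prefix NVMeTLSkey-1, a hash indicator (01 = SHA-256), and a base64 blob holding the configured key bytes with a 4-byte CRC-32 appended, all colon-delimited. A sketch of the construction the inline format_key python performs; the little-endian CRC packing is an assumption here, inferred from the printed keys and matching nvme-cli's gen-tls-key convention:

  key=00112233445566778899aabbccddeeff
  python3 - "$key" <<'EOF'
  import base64, struct, sys, zlib
  key = sys.argv[1].encode()                  # configured key bytes (ASCII string in this run)
  crc = struct.pack("<I", zlib.crc32(key))    # 4-byte CRC-32 of the key, little-endian (assumed)
  # expected to reproduce the key0 value printed in the trace above
  print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode())
  EOF

Condensed from the same stretch of trace, the target-side TLS bring-up is this RPC sequence (arguments as in the run, rpc.py path shortened; a recap, not the canonical script):

  rpc.py sock_set_default_impl -i ssl
  rpc.py sock_impl_set_options -i ssl --tls-version 13   # pin the ssl impl to TLS 1.3
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.PcNjpxd77T
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0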
00:14:31.128 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:31.128 22:45:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:31.128 [2024-12-07 22:45:45.874404] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:31.128 [2024-12-07 22:45:45.874723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83302 ] 00:14:31.387 [2024-12-07 22:45:46.015449] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.387 [2024-12-07 22:45:46.056325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.387 [2024-12-07 22:45:46.088532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:31.387 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:31.387 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:31.387 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PcNjpxd77T 00:14:31.953 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:31.953 [2024-12-07 22:45:46.685739] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:32.211 TLSTESTn1 00:14:32.211 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:32.211 Running I/O for 10 seconds... 
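The bdevperf leg just launched mirrors the spdk_nvme_perf run but drives everything over a private RPC socket. Condensed from the trace (same binaries and arguments, rpc.py and bdevperf.py paths shortened):

  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # -z: start suspended; the workload only runs once perform_tests is sent
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PcNjpxd77T
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests

The PSK has to be loaded into the bdevperf process's own keyring before the attach; key0 on the target and key0 here are separate keyring entries that happen to reference the same file.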
00:14:34.154 3921.00 IOPS, 15.32 MiB/s [2024-12-07T22:45:50.312Z] 3844.50 IOPS, 15.02 MiB/s [2024-12-07T22:45:51.248Z] 3831.00 IOPS, 14.96 MiB/s [2024-12-07T22:45:52.184Z] 3866.50 IOPS, 15.10 MiB/s [2024-12-07T22:45:53.121Z] 3888.60 IOPS, 15.19 MiB/s [2024-12-07T22:45:54.056Z] 3924.33 IOPS, 15.33 MiB/s [2024-12-07T22:45:54.992Z] 3960.71 IOPS, 15.47 MiB/s [2024-12-07T22:45:55.929Z] 4003.50 IOPS, 15.64 MiB/s [2024-12-07T22:45:57.301Z] 4033.67 IOPS, 15.76 MiB/s [2024-12-07T22:45:57.301Z] 4065.00 IOPS, 15.88 MiB/s 00:14:42.535 Latency(us) 00:14:42.535 [2024-12-07T22:45:57.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.535 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:42.535 Verification LBA range: start 0x0 length 0x2000 00:14:42.535 TLSTESTn1 : 10.01 4071.65 15.90 0.00 0.00 31383.21 4170.47 29312.47 00:14:42.535 [2024-12-07T22:45:57.301Z] =================================================================================================================== 00:14:42.535 [2024-12-07T22:45:57.301Z] Total : 4071.65 15.90 0.00 0.00 31383.21 4170.47 29312.47 00:14:42.535 { 00:14:42.535 "results": [ 00:14:42.535 { 00:14:42.535 "job": "TLSTESTn1", 00:14:42.535 "core_mask": "0x4", 00:14:42.535 "workload": "verify", 00:14:42.535 "status": "finished", 00:14:42.535 "verify_range": { 00:14:42.535 "start": 0, 00:14:42.535 "length": 8192 00:14:42.535 }, 00:14:42.535 "queue_depth": 128, 00:14:42.535 "io_size": 4096, 00:14:42.535 "runtime": 10.013879, 00:14:42.535 "iops": 4071.648958410622, 00:14:42.535 "mibps": 15.904878743791492, 00:14:42.535 "io_failed": 0, 00:14:42.535 "io_timeout": 0, 00:14:42.535 "avg_latency_us": 31383.21093379531, 00:14:42.535 "min_latency_us": 4170.472727272727, 00:14:42.535 "max_latency_us": 29312.465454545454 00:14:42.535 } 00:14:42.535 ], 00:14:42.535 "core_count": 1 00:14:42.535 } 00:14:42.535 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:42.535 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83302 00:14:42.535 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83302 ']' 00:14:42.535 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83302 00:14:42.535 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:42.535 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:42.535 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83302 00:14:42.535 killing process with pid 83302 00:14:42.535 Received shutdown signal, test time was about 10.000000 seconds 00:14:42.535 00:14:42.535 Latency(us) 00:14:42.535 [2024-12-07T22:45:57.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.535 [2024-12-07T22:45:57.301Z] =================================================================================================================== 00:14:42.535 [2024-12-07T22:45:57.301Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:42.535 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:42.535 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:42.535 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 83302' 00:14:42.535 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83302 00:14:42.535 22:45:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83302 00:14:42.535 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qqOLPJao6s 00:14:42.535 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:42.535 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qqOLPJao6s 00:14:42.535 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:42.535 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.535 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:42.535 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.535 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qqOLPJao6s 00:14:42.536 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:42.536 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:42.536 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:42.536 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qqOLPJao6s 00:14:42.536 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:42.536 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83429 00:14:42.536 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:42.536 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.536 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83429 /var/tmp/bdevperf.sock 00:14:42.536 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83429 ']' 00:14:42.536 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:42.536 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:42.536 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:42.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:42.536 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:42.536 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.536 [2024-12-07 22:45:57.179387] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
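The failure-path cases from here on lean on the harness's NOT wrapper (with valid_exec_arg checking that the wrapped name is callable): the wrapped command must fail for the test to pass. The wrapper's body isn't visible in this trace; a hypothetical minimal stand-in with the same contract, just for reading the blocks that follow:

  # Hypothetical minimal equivalent of the harness's NOT helper; the real one
  # lives in autotest_common.sh and is not reproduced in this log.
  NOT() {
      if "$@"; then
          return 1    # wrapped command unexpectedly succeeded
      fi
      return 0        # it failed, which is what the caller expects
  }

  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qqOLPJao6s

Here the key file holds key_2 (derived from ffeeddccbbaa99887766554433221100), while the target only has host1 bound to the key derived from 00112233445566778899aabbccddeeff, so the attach below must fail.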
00:14:42.536 [2024-12-07 22:45:57.179682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83429 ] 00:14:42.794 [2024-12-07 22:45:57.315930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.794 [2024-12-07 22:45:57.349551] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.794 [2024-12-07 22:45:57.377363] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:42.794 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:42.794 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:42.794 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qqOLPJao6s 00:14:43.051 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:43.310 [2024-12-07 22:45:57.976485] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:43.310 [2024-12-07 22:45:57.984469] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:43.310 [2024-12-07 22:45:57.985289] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d7d30 (107): Transport endpoint is not connected 00:14:43.310 [2024-12-07 22:45:57.986284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d7d30 (9): Bad file descriptor 00:14:43.311 [2024-12-07 22:45:57.987278] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:43.311 [2024-12-07 22:45:57.987446] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:43.311 [2024-12-07 22:45:57.987604] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:43.311 [2024-12-07 22:45:57.987793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:43.311 request: 00:14:43.311 { 00:14:43.311 "name": "TLSTEST", 00:14:43.311 "trtype": "tcp", 00:14:43.311 "traddr": "10.0.0.3", 00:14:43.311 "adrfam": "ipv4", 00:14:43.311 "trsvcid": "4420", 00:14:43.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:43.311 "prchk_reftag": false, 00:14:43.311 "prchk_guard": false, 00:14:43.311 "hdgst": false, 00:14:43.311 "ddgst": false, 00:14:43.311 "psk": "key0", 00:14:43.311 "allow_unrecognized_csi": false, 00:14:43.311 "method": "bdev_nvme_attach_controller", 00:14:43.311 "req_id": 1 00:14:43.311 } 00:14:43.311 Got JSON-RPC error response 00:14:43.311 response: 00:14:43.311 { 00:14:43.311 "code": -5, 00:14:43.311 "message": "Input/output error" 00:14:43.311 } 00:14:43.311 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83429 00:14:43.311 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83429 ']' 00:14:43.311 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83429 00:14:43.311 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:43.311 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:43.311 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83429 00:14:43.311 killing process with pid 83429 00:14:43.311 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.311 00:14:43.311 Latency(us) 00:14:43.311 [2024-12-07T22:45:58.077Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.311 [2024-12-07T22:45:58.077Z] =================================================================================================================== 00:14:43.311 [2024-12-07T22:45:58.077Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:43.311 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:43.311 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:43.311 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83429' 00:14:43.311 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83429 00:14:43.311 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83429 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PcNjpxd77T 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PcNjpxd77T 
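The next case swaps in a host NQN the subsystem was never told about, while keeping the valid key. Each of these probes repeats the same launch/keyring/attach sequence with a fresh bdevperf pid and a different (subsystem NQN, host NQN, key file) triple; factored out, it is just the following sketch (try_attach is a hypothetical name, the harness inlines these steps in run_bdevperf):

  try_attach() {
      local subnqn=$1 hostnqn=$2 keyfile=$3
      rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$keyfile"
      rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
          -a 10.0.0.3 -s 4420 -f ipv4 -n "$subnqn" -q "$hostnqn" --psk key0
  }

  # valid key, but host2 was never added with nvmf_subsystem_add_host:
  NOT try_attach nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PcNjpxd77T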
00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PcNjpxd77T 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PcNjpxd77T 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83450 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83450 /var/tmp/bdevperf.sock 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83450 ']' 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:43.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:43.570 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:43.570 [2024-12-07 22:45:58.222380] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:43.570 [2024-12-07 22:45:58.222651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83450 ] 00:14:43.829 [2024-12-07 22:45:58.351309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.829 [2024-12-07 22:45:58.383783] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.829 [2024-12-07 22:45:58.410655] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:43.829 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:43.829 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:43.829 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PcNjpxd77T 00:14:44.087 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:44.345 [2024-12-07 22:45:58.988922] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:44.345 [2024-12-07 22:45:58.994470] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:44.345 [2024-12-07 22:45:58.994513] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:44.345 [2024-12-07 22:45:58.994564] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:44.345 [2024-12-07 22:45:58.994656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x869d30 (107): Transport endpoint is not connected 00:14:44.345 [2024-12-07 22:45:58.995644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x869d30 (9): Bad file descriptor 00:14:44.345 [2024-12-07 22:45:58.996640] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:44.345 [2024-12-07 22:45:58.996665] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:44.345 [2024-12-07 22:45:58.996678] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:44.345 [2024-12-07 22:45:58.996689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
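The target-side messages above show how the PSK is actually resolved: during the handshake the target looks the key up by the TLS PSK identity string, NVMe0R01 followed by the host NQN and the subsystem NQN. host2 was never bound to cnode1, so the lookup fails even though the key bytes in /tmp/tmp.PcNjpxd77T are valid. The identity, reassembled from the error text:

  # PSK identity exactly as printed in the target's error message above:
  hostnqn=nqn.2016-06.io.spdk:host2
  subnqn=nqn.2016-06.io.spdk:cnode1
  printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"

From the initiator's point of view the result is indistinguishable from the wrong-key case: the same errno 107 cascade and the same code -5 response, shown next.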
00:14:44.345 request: 00:14:44.345 { 00:14:44.345 "name": "TLSTEST", 00:14:44.345 "trtype": "tcp", 00:14:44.345 "traddr": "10.0.0.3", 00:14:44.345 "adrfam": "ipv4", 00:14:44.345 "trsvcid": "4420", 00:14:44.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.345 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:44.345 "prchk_reftag": false, 00:14:44.345 "prchk_guard": false, 00:14:44.345 "hdgst": false, 00:14:44.345 "ddgst": false, 00:14:44.345 "psk": "key0", 00:14:44.345 "allow_unrecognized_csi": false, 00:14:44.345 "method": "bdev_nvme_attach_controller", 00:14:44.345 "req_id": 1 00:14:44.345 } 00:14:44.345 Got JSON-RPC error response 00:14:44.345 response: 00:14:44.345 { 00:14:44.345 "code": -5, 00:14:44.345 "message": "Input/output error" 00:14:44.345 } 00:14:44.345 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83450 00:14:44.345 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83450 ']' 00:14:44.345 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83450 00:14:44.345 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:44.345 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:44.345 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83450 00:14:44.345 killing process with pid 83450 00:14:44.345 Received shutdown signal, test time was about 10.000000 seconds 00:14:44.345 00:14:44.345 Latency(us) 00:14:44.345 [2024-12-07T22:45:59.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.345 [2024-12-07T22:45:59.111Z] =================================================================================================================== 00:14:44.345 [2024-12-07T22:45:59.111Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:44.345 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:44.345 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:44.345 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83450' 00:14:44.345 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83450 00:14:44.345 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83450 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PcNjpxd77T 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PcNjpxd77T 
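The third negative probe, announced at the end of the block above, flips the triple the other way: host1's valid key against nqn.2016-06.io.spdk:cnode2, a subsystem this target never created. The expectation is the same uniform failure, and the trace below indeed produces the identical code -5 Input/output error; nothing in the error surfaced to the initiator reveals whether the key, the host NQN, or the subsystem NQN was the wrong element.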
00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PcNjpxd77T 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.PcNjpxd77T 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83471 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83471 /var/tmp/bdevperf.sock 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83471 ']' 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:44.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:44.603 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.603 [2024-12-07 22:45:59.246387] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:44.603 [2024-12-07 22:45:59.246502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83471 ] 00:14:44.862 [2024-12-07 22:45:59.382838] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.862 [2024-12-07 22:45:59.423707] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.862 [2024-12-07 22:45:59.456315] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:44.862 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.862 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:44.862 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.PcNjpxd77T 00:14:45.120 22:45:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:45.378 [2024-12-07 22:46:00.090322] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:45.378 [2024-12-07 22:46:00.095408] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:45.378 [2024-12-07 22:46:00.095614] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:45.378 [2024-12-07 22:46:00.095795] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:45.378 [2024-12-07 22:46:00.096130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1362d30 (107): Transport endpoint is not connected 00:14:45.378 [2024-12-07 22:46:00.097120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1362d30 (9): Bad file descriptor 00:14:45.378 [2024-12-07 22:46:00.098115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:45.378 [2024-12-07 22:46:00.098280] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:45.379 [2024-12-07 22:46:00.098395] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:45.379 [2024-12-07 22:46:00.098522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
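The ERROR lines above are the heart of this negative test: host1 connects to cnode2, but the target has no PSK registered under the identity it offers, so the TLS handshake dies server-side, the initiator's socket drops (errno 107), controller init fails, and the RPC below returns -5. A sketch of the identity string being looked up; its layout is an assumption read off the logged line, following the NVMe/TCP TLS PSK identity convention:

    def psk_identity(hostnqn: str, subnqn: str, hash_id: str = "01") -> str:
        # "NVMe" + spec version ("0") + "R" (retained PSK) + 2-digit hash id
        # (01 = SHA-256, 02 = SHA-384), then host NQN and subsystem NQN,
        # space-separated. Layout inferred from the logged identity string.
        return f"NVMe0R{hash_id} {hostnqn} {subnqn}"

    print(psk_identity("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode2"))
    # -> NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2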
00:14:45.379 request: 00:14:45.379 { 00:14:45.379 "name": "TLSTEST", 00:14:45.379 "trtype": "tcp", 00:14:45.379 "traddr": "10.0.0.3", 00:14:45.379 "adrfam": "ipv4", 00:14:45.379 "trsvcid": "4420", 00:14:45.379 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:45.379 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:45.379 "prchk_reftag": false, 00:14:45.379 "prchk_guard": false, 00:14:45.379 "hdgst": false, 00:14:45.379 "ddgst": false, 00:14:45.379 "psk": "key0", 00:14:45.379 "allow_unrecognized_csi": false, 00:14:45.379 "method": "bdev_nvme_attach_controller", 00:14:45.379 "req_id": 1 00:14:45.379 } 00:14:45.379 Got JSON-RPC error response 00:14:45.379 response: 00:14:45.379 { 00:14:45.379 "code": -5, 00:14:45.379 "message": "Input/output error" 00:14:45.379 } 00:14:45.379 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83471 00:14:45.379 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83471 ']' 00:14:45.379 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83471 00:14:45.379 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:45.379 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:45.379 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83471 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83471' 00:14:45.638 killing process with pid 83471 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83471 00:14:45.638 Received shutdown signal, test time was about 10.000000 seconds 00:14:45.638 00:14:45.638 Latency(us) 00:14:45.638 [2024-12-07T22:46:00.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.638 [2024-12-07T22:46:00.404Z] =================================================================================================================== 00:14:45.638 [2024-12-07T22:46:00.404Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83471 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:45.638 22:46:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83492 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83492 /var/tmp/bdevperf.sock 00:14:45.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83492 ']' 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:45.638 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.638 [2024-12-07 22:46:00.364979] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:45.638 [2024-12-07 22:46:00.365080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83492 ] 00:14:45.896 [2024-12-07 22:46:00.502824] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.896 [2024-12-07 22:46:00.538722] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.896 [2024-12-07 22:46:00.568328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.896 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:45.896 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:45.896 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:46.155 [2024-12-07 22:46:00.904200] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:46.155 [2024-12-07 22:46:00.904493] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:46.155 request: 00:14:46.155 { 00:14:46.155 "name": "key0", 00:14:46.155 "path": "", 00:14:46.155 "method": "keyring_file_add_key", 00:14:46.155 "req_id": 1 00:14:46.155 } 00:14:46.155 Got JSON-RPC error response 00:14:46.155 response: 00:14:46.155 { 00:14:46.155 "code": -1, 00:14:46.155 "message": "Operation not permitted" 00:14:46.155 } 00:14:46.413 22:46:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:46.413 [2024-12-07 22:46:01.156367] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:46.413 [2024-12-07 22:46:01.156661] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:46.413 request: 00:14:46.413 { 00:14:46.413 "name": "TLSTEST", 00:14:46.413 "trtype": "tcp", 00:14:46.413 "traddr": "10.0.0.3", 00:14:46.413 "adrfam": "ipv4", 00:14:46.413 "trsvcid": "4420", 00:14:46.413 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.413 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:46.413 "prchk_reftag": false, 00:14:46.413 "prchk_guard": false, 00:14:46.413 "hdgst": false, 00:14:46.413 "ddgst": false, 00:14:46.413 "psk": "key0", 00:14:46.413 "allow_unrecognized_csi": false, 00:14:46.413 "method": "bdev_nvme_attach_controller", 00:14:46.413 "req_id": 1 00:14:46.413 } 00:14:46.413 Got JSON-RPC error response 00:14:46.413 response: 00:14:46.413 { 00:14:46.413 "code": -126, 00:14:46.413 "message": "Required key not available" 00:14:46.413 } 00:14:46.413 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83492 00:14:46.413 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83492 ']' 00:14:46.413 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83492 00:14:46.413 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:46.672 22:46:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83492 00:14:46.672 killing process with pid 83492 00:14:46.672 Received shutdown signal, test time was about 10.000000 seconds 00:14:46.672 00:14:46.672 Latency(us) 00:14:46.672 [2024-12-07T22:46:01.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.672 [2024-12-07T22:46:01.438Z] =================================================================================================================== 00:14:46.672 [2024-12-07T22:46:01.438Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83492' 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83492 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83492 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 83065 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83065 ']' 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83065 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83065 00:14:46.672 killing process with pid 83065 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83065' 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83065 00:14:46.672 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83065 00:14:46.931 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:46.931 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:46.931 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:46.931 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 
-- # prefix=NVMeTLSkey-1 00:14:46.931 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:46.931 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:14:46.931 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:46.931 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:46.932 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:46.932 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.oJIX4EO56n 00:14:46.932 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:46.932 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.oJIX4EO56n 00:14:46.932 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:46.932 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:46.932 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:46.932 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.932 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83529 00:14:46.932 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:46.932 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83529 00:14:46.932 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83529 ']' 00:14:46.932 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.932 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:46.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.932 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.932 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:46.932 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.932 [2024-12-07 22:46:01.628406] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:14:46.932 [2024-12-07 22:46:01.628513] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.191 [2024-12-07 22:46:01.759111] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.191 [2024-12-07 22:46:01.791870] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.191 [2024-12-07 22:46:01.792176] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
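format_interchange_psk above wraps a raw hex key into the TLS PSK interchange format via an inline python heredoc. A standalone sketch of the presumed computation; the CRC32/little-endian detail is an assumption, though it is consistent with the "wWXNJw==" tail of the key_long printed above:

    import base64, zlib

    # Assumed derivation: the 48 hex digits are treated as opaque ASCII
    # bytes, their CRC32 is appended little-endian, and the result is
    # base64-wrapped between the "NVMeTLSkey-1" prefix and the hash id
    # (02 = SHA-384, matching the digest=2 argument above).
    key = b"00112233445566778899aabbccddeeff0011223344556677"
    crc = zlib.crc32(key).to_bytes(4, "little")
    print("NVMeTLSkey-1:02:" + base64.b64encode(key + crc).decode() + ":")
    # expected to reproduce the key_long value printed above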
00:14:47.191 [2024-12-07 22:46:01.792214] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.191 [2024-12-07 22:46:01.792223] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.191 [2024-12-07 22:46:01.792230] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.191 [2024-12-07 22:46:01.792263] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.191 [2024-12-07 22:46:01.819160] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:47.191 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:47.191 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:47.191 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:47.191 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:47.191 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:47.191 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.191 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.oJIX4EO56n 00:14:47.191 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oJIX4EO56n 00:14:47.191 22:46:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:47.450 [2024-12-07 22:46:02.198485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.709 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:47.968 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:47.968 [2024-12-07 22:46:02.726651] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:47.968 [2024-12-07 22:46:02.727099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:48.227 22:46:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:48.487 malloc0 00:14:48.487 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:48.746 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oJIX4EO56n 00:14:49.005 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:49.264 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oJIX4EO56n 00:14:49.264 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
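setup_nvmf_tgt, traced above, is the entire target-side TLS recipe in seven RPCs against the default /var/tmp/spdk.sock. The same sequence driven from Python; every argument is copied verbatim from the traced commands, and order matters: transport before subsystem, and the key must exist before the host entry that references it.

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    for args in (
        ["nvmf_create_transport", "-t", "tcp", "-o"],
        ["nvmf_create_subsystem", "nqn.2016-06.io.spdk:cnode1",
         "-s", "SPDK00000000000001", "-m", "10"],
        # -k asks for a TLS-capable listener ("TLS support is considered
        # experimental" in the target log above)
        ["nvmf_subsystem_add_listener", "nqn.2016-06.io.spdk:cnode1",
         "-t", "tcp", "-a", "10.0.0.3", "-s", "4420", "-k"],
        ["bdev_malloc_create", "32", "4096", "-b", "malloc0"],
        ["nvmf_subsystem_add_ns", "nqn.2016-06.io.spdk:cnode1", "malloc0", "-n", "1"],
        ["keyring_file_add_key", "key0", "/tmp/tmp.oJIX4EO56n"],
        ["nvmf_subsystem_add_host", "nqn.2016-06.io.spdk:cnode1",
         "nqn.2016-06.io.spdk:host1", "--psk", "key0"],
    ):
        subprocess.run([RPC] + args, check=True)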
00:14:49.264 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:49.264 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:49.264 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.oJIX4EO56n 00:14:49.264 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:49.264 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83577 00:14:49.264 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:49.264 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:49.264 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83577 /var/tmp/bdevperf.sock 00:14:49.264 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83577 ']' 00:14:49.264 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:49.264 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:49.264 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:49.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:49.264 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:49.264 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:49.264 [2024-12-07 22:46:03.887381] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:14:49.264 [2024-12-07 22:46:03.887705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83577 ] 00:14:49.264 [2024-12-07 22:46:04.023120] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.523 [2024-12-07 22:46:04.065725] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.523 [2024-12-07 22:46:04.099505] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:49.523 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:49.523 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:49.523 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oJIX4EO56n 00:14:49.786 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:50.063 [2024-12-07 22:46:04.634161] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:50.063 TLSTESTn1 00:14:50.063 22:46:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:50.335 Running I/O for 10 seconds... 00:14:52.201 4372.00 IOPS, 17.08 MiB/s [2024-12-07T22:46:07.899Z] 4379.50 IOPS, 17.11 MiB/s [2024-12-07T22:46:08.833Z] 4405.00 IOPS, 17.21 MiB/s [2024-12-07T22:46:10.209Z] 4394.50 IOPS, 17.17 MiB/s [2024-12-07T22:46:11.145Z] 4384.00 IOPS, 17.12 MiB/s [2024-12-07T22:46:12.081Z] 4346.67 IOPS, 16.98 MiB/s [2024-12-07T22:46:13.016Z] 4373.00 IOPS, 17.08 MiB/s [2024-12-07T22:46:13.952Z] 4359.88 IOPS, 17.03 MiB/s [2024-12-07T22:46:14.889Z] 4336.11 IOPS, 16.94 MiB/s [2024-12-07T22:46:14.889Z] 4315.60 IOPS, 16.86 MiB/s 00:15:00.123 Latency(us) 00:15:00.123 [2024-12-07T22:46:14.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.123 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:00.123 Verification LBA range: start 0x0 length 0x2000 00:15:00.123 TLSTESTn1 : 10.01 4321.70 16.88 0.00 0.00 29567.18 4766.25 30980.65 00:15:00.123 [2024-12-07T22:46:14.889Z] =================================================================================================================== 00:15:00.123 [2024-12-07T22:46:14.889Z] Total : 4321.70 16.88 0.00 0.00 29567.18 4766.25 30980.65 00:15:00.123 { 00:15:00.123 "results": [ 00:15:00.123 { 00:15:00.123 "job": "TLSTESTn1", 00:15:00.123 "core_mask": "0x4", 00:15:00.123 "workload": "verify", 00:15:00.123 "status": "finished", 00:15:00.123 "verify_range": { 00:15:00.123 "start": 0, 00:15:00.123 "length": 8192 00:15:00.123 }, 00:15:00.123 "queue_depth": 128, 00:15:00.123 "io_size": 4096, 00:15:00.123 "runtime": 10.014814, 00:15:00.123 "iops": 4321.697836824528, 00:15:00.123 "mibps": 16.881632175095813, 00:15:00.123 "io_failed": 0, 00:15:00.123 "io_timeout": 0, 00:15:00.123 "avg_latency_us": 29567.177633519645, 00:15:00.123 "min_latency_us": 4766.254545454545, 00:15:00.124 
"max_latency_us": 30980.654545454545 00:15:00.124 } 00:15:00.124 ], 00:15:00.124 "core_count": 1 00:15:00.124 } 00:15:00.124 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:00.124 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83577 00:15:00.124 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83577 ']' 00:15:00.124 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83577 00:15:00.124 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:00.124 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:00.124 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83577 00:15:00.383 killing process with pid 83577 00:15:00.383 Received shutdown signal, test time was about 10.000000 seconds 00:15:00.383 00:15:00.383 Latency(us) 00:15:00.383 [2024-12-07T22:46:15.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.383 [2024-12-07T22:46:15.149Z] =================================================================================================================== 00:15:00.383 [2024-12-07T22:46:15.149Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:00.383 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:00.383 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:00.383 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83577' 00:15:00.383 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83577 00:15:00.383 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83577 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.oJIX4EO56n 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oJIX4EO56n 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oJIX4EO56n 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.oJIX4EO56n 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.oJIX4EO56n 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83707 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83707 /var/tmp/bdevperf.sock 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83707 ']' 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:00.383 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:00.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:00.384 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:00.384 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:00.384 [2024-12-07 22:46:15.099330] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
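The 10-second run above reports 4321.70 IOPS, 16.88 MiB/s and a 29567 us average latency at queue depth 128; the three figures are mutually consistent, which is a quick sanity check worth running on any bdevperf table:

    iops = 4321.70
    io_size = 4096                       # -o 4096 on the bdevperf command line
    qdepth = 128                         # -q 128
    print(iops * io_size / (1 << 20))    # ~16.88 MiB/s, matching the table
    print(qdepth / iops * 1e6)           # Little's law: ~29618 us vs 29567 us reported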
00:15:00.384 [2024-12-07 22:46:15.099648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83707 ] 00:15:00.642 [2024-12-07 22:46:15.233408] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.642 [2024-12-07 22:46:15.270376] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.642 [2024-12-07 22:46:15.300635] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:00.642 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:00.642 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:00.642 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oJIX4EO56n 00:15:00.899 [2024-12-07 22:46:15.605120] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.oJIX4EO56n': 0100666 00:15:00.899 [2024-12-07 22:46:15.605170] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:00.899 request: 00:15:00.899 { 00:15:00.899 "name": "key0", 00:15:00.899 "path": "/tmp/tmp.oJIX4EO56n", 00:15:00.899 "method": "keyring_file_add_key", 00:15:00.899 "req_id": 1 00:15:00.899 } 00:15:00.899 Got JSON-RPC error response 00:15:00.899 response: 00:15:00.899 { 00:15:00.899 "code": -1, 00:15:00.899 "message": "Operation not permitted" 00:15:00.899 } 00:15:00.900 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:01.158 [2024-12-07 22:46:15.897339] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:01.158 [2024-12-07 22:46:15.897464] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:01.158 request: 00:15:01.158 { 00:15:01.158 "name": "TLSTEST", 00:15:01.158 "trtype": "tcp", 00:15:01.158 "traddr": "10.0.0.3", 00:15:01.158 "adrfam": "ipv4", 00:15:01.158 "trsvcid": "4420", 00:15:01.158 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.158 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:01.158 "prchk_reftag": false, 00:15:01.158 "prchk_guard": false, 00:15:01.158 "hdgst": false, 00:15:01.158 "ddgst": false, 00:15:01.158 "psk": "key0", 00:15:01.158 "allow_unrecognized_csi": false, 00:15:01.158 "method": "bdev_nvme_attach_controller", 00:15:01.158 "req_id": 1 00:15:01.158 } 00:15:01.158 Got JSON-RPC error response 00:15:01.158 response: 00:15:01.158 { 00:15:01.158 "code": -126, 00:15:01.158 "message": "Required key not available" 00:15:01.158 } 00:15:01.158 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83707 00:15:01.158 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83707 ']' 00:15:01.158 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83707 00:15:01.158 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:01.158 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:01.417 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83707 00:15:01.417 killing process with pid 83707 00:15:01.417 Received shutdown signal, test time was about 10.000000 seconds 00:15:01.417 00:15:01.417 Latency(us) 00:15:01.417 [2024-12-07T22:46:16.183Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.417 [2024-12-07T22:46:16.183Z] =================================================================================================================== 00:15:01.417 [2024-12-07T22:46:16.183Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:01.417 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:01.417 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:01.417 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83707' 00:15:01.417 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83707 00:15:01.417 22:46:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83707 00:15:01.417 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:01.417 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:01.417 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:01.417 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:01.417 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:01.417 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83529 00:15:01.417 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83529 ']' 00:15:01.417 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83529 00:15:01.417 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:01.417 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:01.417 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83529 00:15:01.417 killing process with pid 83529 00:15:01.417 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:01.417 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:01.417 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83529' 00:15:01.417 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83529 00:15:01.417 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83529 00:15:01.676 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:15:01.676 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:01.676 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:01.676 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:15:01.676 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:01.676 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83733 00:15:01.676 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83733 00:15:01.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.676 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83733 ']' 00:15:01.676 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.676 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:01.676 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.676 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:01.676 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.676 [2024-12-07 22:46:16.335163] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:01.676 [2024-12-07 22:46:16.335441] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.935 [2024-12-07 22:46:16.466360] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.935 [2024-12-07 22:46:16.502861] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.935 [2024-12-07 22:46:16.503142] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.935 [2024-12-07 22:46:16.503386] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.935 [2024-12-07 22:46:16.503508] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.935 [2024-12-07 22:46:16.503598] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
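Every "Waiting for process to start up and listen on UNIX domain socket ..." line in this log is the waitforlisten helper polling the new process's RPC socket before any commands are issued. A rough Python equivalent; the poll interval and timeout are arbitrary values for the sketch, not the helper's real ones:

    import socket, time

    def waitforlisten(sock_path, timeout=30.0):
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return True              # target is up and accepting RPCs
            except OSError:
                time.sleep(0.1)          # not ready yet, retry
            finally:
                s.close()
        raise TimeoutError(f"{sock_path} never came up")

    waitforlisten("/var/tmp/spdk.sock")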
00:15:01.935 [2024-12-07 22:46:16.503676] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.935 [2024-12-07 22:46:16.533726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:01.935 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:01.935 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:01.935 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:01.935 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:01.935 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.935 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.935 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.oJIX4EO56n 00:15:01.935 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:01.935 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.oJIX4EO56n 00:15:01.935 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:15:01.935 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:01.935 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:15:01.935 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:01.935 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.oJIX4EO56n 00:15:01.935 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oJIX4EO56n 00:15:01.935 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:02.194 [2024-12-07 22:46:16.898095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:02.194 22:46:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:02.453 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:02.712 [2024-12-07 22:46:17.406340] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:02.712 [2024-12-07 22:46:17.406596] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:02.712 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:02.971 malloc0 00:15:02.971 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:03.230 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oJIX4EO56n 00:15:03.489 
[2024-12-07 22:46:18.218763] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.oJIX4EO56n': 0100666 00:15:03.489 [2024-12-07 22:46:18.218857] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:03.489 request: 00:15:03.489 { 00:15:03.489 "name": "key0", 00:15:03.489 "path": "/tmp/tmp.oJIX4EO56n", 00:15:03.489 "method": "keyring_file_add_key", 00:15:03.489 "req_id": 1 00:15:03.489 } 00:15:03.489 Got JSON-RPC error response 00:15:03.489 response: 00:15:03.489 { 00:15:03.489 "code": -1, 00:15:03.489 "message": "Operation not permitted" 00:15:03.489 } 00:15:03.490 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:03.749 [2024-12-07 22:46:18.474896] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:03.749 [2024-12-07 22:46:18.475309] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:03.749 request: 00:15:03.750 { 00:15:03.750 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:03.750 "host": "nqn.2016-06.io.spdk:host1", 00:15:03.750 "psk": "key0", 00:15:03.750 "method": "nvmf_subsystem_add_host", 00:15:03.750 "req_id": 1 00:15:03.750 } 00:15:03.750 Got JSON-RPC error response 00:15:03.750 response: 00:15:03.750 { 00:15:03.750 "code": -32603, 00:15:03.750 "message": "Internal error" 00:15:03.750 } 00:15:03.750 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:03.750 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:03.750 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:03.750 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:03.750 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 83733 00:15:03.750 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83733 ']' 00:15:03.750 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83733 00:15:03.750 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:03.750 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:03.750 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83733 00:15:04.009 killing process with pid 83733 00:15:04.009 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:04.009 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:04.009 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83733' 00:15:04.009 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83733 00:15:04.009 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83733 00:15:04.009 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.oJIX4EO56n 00:15:04.009 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:04.009 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:04.009 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:04.009 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.009 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:04.009 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83789 00:15:04.009 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83789 00:15:04.009 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83789 ']' 00:15:04.009 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.010 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:04.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.010 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.010 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:04.010 22:46:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.010 [2024-12-07 22:46:18.737908] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:04.010 [2024-12-07 22:46:18.738212] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.275 [2024-12-07 22:46:18.872891] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.275 [2024-12-07 22:46:18.908615] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.275 [2024-12-07 22:46:18.908672] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.275 [2024-12-07 22:46:18.908684] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.275 [2024-12-07 22:46:18.908693] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.275 [2024-12-07 22:46:18.908700] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
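The 0100666 errors above demonstrate the keyring's permission rule: a key file that group or others can read or write is refused outright, which is why the key was created with chmod 0600, and why nvmf_subsystem_add_host then fails with -32603 once key0 never made it into the keyring. A local reproduction of the check; the exact 0o077 mask is an inference from the error message, not lifted from keyring.c:

    import os, stat, tempfile

    def keyring_would_accept(path: str) -> bool:
        # Reject any group/other permission bits, mirroring the 0600 rule.
        return stat.S_IMODE(os.stat(path).st_mode) & 0o077 == 0

    fd, path = tempfile.mkstemp()
    os.close(fd)
    os.chmod(path, 0o666)
    print(oct(0o666), keyring_would_accept(path))   # False -> rejected, as above
    os.chmod(path, 0o600)
    print(oct(0o600), keyring_would_accept(path))   # True  -> accepted
    os.unlink(path)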
00:15:04.275 [2024-12-07 22:46:18.908729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.275 [2024-12-07 22:46:18.938452] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:04.275 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:04.275 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:04.275 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:04.275 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:04.275 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:04.545 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.545 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.oJIX4EO56n 00:15:04.545 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oJIX4EO56n 00:15:04.545 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:04.545 [2024-12-07 22:46:19.279680] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:04.545 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:05.114 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:05.114 [2024-12-07 22:46:19.827853] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:05.114 [2024-12-07 22:46:19.828152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:05.114 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:05.373 malloc0 00:15:05.373 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:05.632 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oJIX4EO56n 00:15:05.891 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:06.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
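The bdevperf instance being waited on here (pid 83838) is driven exactly like the earlier ones: started suspended with -z on a private RPC socket, given the key and the TLS controller over that socket, then told to run I/O via bdevperf.py. The commands below are copied from the traces in this log:

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    PERF = "/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py"
    SOCK = "/var/tmp/bdevperf.sock"

    subprocess.run([RPC, "-s", SOCK, "keyring_file_add_key",
                    "key0", "/tmp/tmp.oJIX4EO56n"], check=True)
    subprocess.run([RPC, "-s", SOCK, "bdev_nvme_attach_controller",
                    "-b", "TLSTEST", "-t", "tcp", "-a", "10.0.0.3", "-s", "4420",
                    "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1",
                    "-q", "nqn.2016-06.io.spdk:host1", "--psk", "key0"], check=True)
    subprocess.run([PERF, "-t", "20", "-s", SOCK, "perform_tests"], check=True)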
00:15:06.150 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=83838 00:15:06.150 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:06.150 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:06.150 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 83838 /var/tmp/bdevperf.sock 00:15:06.150 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83838 ']' 00:15:06.150 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:06.150 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:06.150 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:06.150 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:06.150 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.409 [2024-12-07 22:46:20.916417] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:06.409 [2024-12-07 22:46:20.916703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83838 ] 00:15:06.409 [2024-12-07 22:46:21.047365] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.409 [2024-12-07 22:46:21.090903] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.409 [2024-12-07 22:46:21.130697] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:06.668 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:06.668 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:06.668 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oJIX4EO56n 00:15:06.668 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:06.927 [2024-12-07 22:46:21.667677] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:07.187 TLSTESTn1 00:15:07.187 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:07.446 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:07.446 "subsystems": [ 00:15:07.446 { 00:15:07.446 "subsystem": "keyring", 00:15:07.446 "config": [ 00:15:07.446 { 00:15:07.446 "method": "keyring_file_add_key", 00:15:07.446 "params": { 00:15:07.446 "name": "key0", 00:15:07.446 "path": "/tmp/tmp.oJIX4EO56n" 00:15:07.446 } 00:15:07.446 } 00:15:07.446 ] 00:15:07.446 }, 
00:15:07.446 { 00:15:07.446 "subsystem": "iobuf", 00:15:07.446 "config": [ 00:15:07.446 { 00:15:07.446 "method": "iobuf_set_options", 00:15:07.446 "params": { 00:15:07.446 "small_pool_count": 8192, 00:15:07.446 "large_pool_count": 1024, 00:15:07.446 "small_bufsize": 8192, 00:15:07.446 "large_bufsize": 135168 00:15:07.446 } 00:15:07.446 } 00:15:07.446 ] 00:15:07.446 }, 00:15:07.446 { 00:15:07.446 "subsystem": "sock", 00:15:07.446 "config": [ 00:15:07.446 { 00:15:07.446 "method": "sock_set_default_impl", 00:15:07.446 "params": { 00:15:07.446 "impl_name": "uring" 00:15:07.446 } 00:15:07.446 }, 00:15:07.446 { 00:15:07.446 "method": "sock_impl_set_options", 00:15:07.446 "params": { 00:15:07.446 "impl_name": "ssl", 00:15:07.446 "recv_buf_size": 4096, 00:15:07.446 "send_buf_size": 4096, 00:15:07.446 "enable_recv_pipe": true, 00:15:07.446 "enable_quickack": false, 00:15:07.446 "enable_placement_id": 0, 00:15:07.446 "enable_zerocopy_send_server": true, 00:15:07.446 "enable_zerocopy_send_client": false, 00:15:07.446 "zerocopy_threshold": 0, 00:15:07.446 "tls_version": 0, 00:15:07.446 "enable_ktls": false 00:15:07.446 } 00:15:07.446 }, 00:15:07.446 { 00:15:07.446 "method": "sock_impl_set_options", 00:15:07.446 "params": { 00:15:07.446 "impl_name": "posix", 00:15:07.446 "recv_buf_size": 2097152, 00:15:07.446 "send_buf_size": 2097152, 00:15:07.446 "enable_recv_pipe": true, 00:15:07.446 "enable_quickack": false, 00:15:07.446 "enable_placement_id": 0, 00:15:07.446 "enable_zerocopy_send_server": true, 00:15:07.446 "enable_zerocopy_send_client": false, 00:15:07.446 "zerocopy_threshold": 0, 00:15:07.446 "tls_version": 0, 00:15:07.446 "enable_ktls": false 00:15:07.446 } 00:15:07.446 }, 00:15:07.446 { 00:15:07.446 "method": "sock_impl_set_options", 00:15:07.446 "params": { 00:15:07.446 "impl_name": "uring", 00:15:07.446 "recv_buf_size": 2097152, 00:15:07.446 "send_buf_size": 2097152, 00:15:07.446 "enable_recv_pipe": true, 00:15:07.446 "enable_quickack": false, 00:15:07.446 "enable_placement_id": 0, 00:15:07.446 "enable_zerocopy_send_server": false, 00:15:07.446 "enable_zerocopy_send_client": false, 00:15:07.446 "zerocopy_threshold": 0, 00:15:07.446 "tls_version": 0, 00:15:07.446 "enable_ktls": false 00:15:07.446 } 00:15:07.446 } 00:15:07.446 ] 00:15:07.446 }, 00:15:07.446 { 00:15:07.446 "subsystem": "vmd", 00:15:07.446 "config": [] 00:15:07.446 }, 00:15:07.446 { 00:15:07.446 "subsystem": "accel", 00:15:07.446 "config": [ 00:15:07.446 { 00:15:07.446 "method": "accel_set_options", 00:15:07.446 "params": { 00:15:07.446 "small_cache_size": 128, 00:15:07.446 "large_cache_size": 16, 00:15:07.446 "task_count": 2048, 00:15:07.446 "sequence_count": 2048, 00:15:07.446 "buf_count": 2048 00:15:07.446 } 00:15:07.446 } 00:15:07.446 ] 00:15:07.446 }, 00:15:07.446 { 00:15:07.446 "subsystem": "bdev", 00:15:07.446 "config": [ 00:15:07.447 { 00:15:07.447 "method": "bdev_set_options", 00:15:07.447 "params": { 00:15:07.447 "bdev_io_pool_size": 65535, 00:15:07.447 "bdev_io_cache_size": 256, 00:15:07.447 "bdev_auto_examine": true, 00:15:07.447 "iobuf_small_cache_size": 128, 00:15:07.447 "iobuf_large_cache_size": 16 00:15:07.447 } 00:15:07.447 }, 00:15:07.447 { 00:15:07.447 "method": "bdev_raid_set_options", 00:15:07.447 "params": { 00:15:07.447 "process_window_size_kb": 1024, 00:15:07.447 "process_max_bandwidth_mb_sec": 0 00:15:07.447 } 00:15:07.447 }, 00:15:07.447 { 00:15:07.447 "method": "bdev_iscsi_set_options", 00:15:07.447 "params": { 00:15:07.447 "timeout_sec": 30 00:15:07.447 } 00:15:07.447 }, 00:15:07.447 { 00:15:07.447 
"method": "bdev_nvme_set_options", 00:15:07.447 "params": { 00:15:07.447 "action_on_timeout": "none", 00:15:07.447 "timeout_us": 0, 00:15:07.447 "timeout_admin_us": 0, 00:15:07.447 "keep_alive_timeout_ms": 10000, 00:15:07.447 "arbitration_burst": 0, 00:15:07.447 "low_priority_weight": 0, 00:15:07.447 "medium_priority_weight": 0, 00:15:07.447 "high_priority_weight": 0, 00:15:07.447 "nvme_adminq_poll_period_us": 10000, 00:15:07.447 "nvme_ioq_poll_period_us": 0, 00:15:07.447 "io_queue_requests": 0, 00:15:07.447 "delay_cmd_submit": true, 00:15:07.447 "transport_retry_count": 4, 00:15:07.447 "bdev_retry_count": 3, 00:15:07.447 "transport_ack_timeout": 0, 00:15:07.447 "ctrlr_loss_timeout_sec": 0, 00:15:07.447 "reconnect_delay_sec": 0, 00:15:07.447 "fast_io_fail_timeout_sec": 0, 00:15:07.447 "disable_auto_failback": false, 00:15:07.447 "generate_uuids": false, 00:15:07.447 "transport_tos": 0, 00:15:07.447 "nvme_error_stat": false, 00:15:07.447 "rdma_srq_size": 0, 00:15:07.447 "io_path_stat": false, 00:15:07.447 "allow_accel_sequence": false, 00:15:07.447 "rdma_max_cq_size": 0, 00:15:07.447 "rdma_cm_event_timeout_ms": 0, 00:15:07.447 "dhchap_digests": [ 00:15:07.447 "sha256", 00:15:07.447 "sha384", 00:15:07.447 "sha512" 00:15:07.447 ], 00:15:07.447 "dhchap_dhgroups": [ 00:15:07.447 "null", 00:15:07.447 "ffdhe2048", 00:15:07.447 "ffdhe3072", 00:15:07.447 "ffdhe4096", 00:15:07.447 "ffdhe6144", 00:15:07.447 "ffdhe8192" 00:15:07.447 ] 00:15:07.447 } 00:15:07.447 }, 00:15:07.447 { 00:15:07.447 "method": "bdev_nvme_set_hotplug", 00:15:07.447 "params": { 00:15:07.447 "period_us": 100000, 00:15:07.447 "enable": false 00:15:07.447 } 00:15:07.447 }, 00:15:07.447 { 00:15:07.447 "method": "bdev_malloc_create", 00:15:07.447 "params": { 00:15:07.447 "name": "malloc0", 00:15:07.447 "num_blocks": 8192, 00:15:07.447 "block_size": 4096, 00:15:07.447 "physical_block_size": 4096, 00:15:07.447 "uuid": "1a0aef42-be67-4545-bb91-3b4ad710b2b1", 00:15:07.447 "optimal_io_boundary": 0, 00:15:07.447 "md_size": 0, 00:15:07.447 "dif_type": 0, 00:15:07.447 "dif_is_head_of_md": false, 00:15:07.447 "dif_pi_format": 0 00:15:07.447 } 00:15:07.447 }, 00:15:07.447 { 00:15:07.447 "method": "bdev_wait_for_examine" 00:15:07.447 } 00:15:07.447 ] 00:15:07.447 }, 00:15:07.447 { 00:15:07.447 "subsystem": "nbd", 00:15:07.447 "config": [] 00:15:07.447 }, 00:15:07.447 { 00:15:07.447 "subsystem": "scheduler", 00:15:07.447 "config": [ 00:15:07.447 { 00:15:07.447 "method": "framework_set_scheduler", 00:15:07.447 "params": { 00:15:07.447 "name": "static" 00:15:07.447 } 00:15:07.447 } 00:15:07.447 ] 00:15:07.447 }, 00:15:07.447 { 00:15:07.447 "subsystem": "nvmf", 00:15:07.447 "config": [ 00:15:07.447 { 00:15:07.447 "method": "nvmf_set_config", 00:15:07.447 "params": { 00:15:07.447 "discovery_filter": "match_any", 00:15:07.447 "admin_cmd_passthru": { 00:15:07.447 "identify_ctrlr": false 00:15:07.447 }, 00:15:07.447 "dhchap_digests": [ 00:15:07.447 "sha256", 00:15:07.447 "sha384", 00:15:07.447 "sha512" 00:15:07.447 ], 00:15:07.447 "dhchap_dhgroups": [ 00:15:07.447 "null", 00:15:07.447 "ffdhe2048", 00:15:07.447 "ffdhe3072", 00:15:07.447 "ffdhe4096", 00:15:07.447 "ffdhe6144", 00:15:07.447 "ffdhe8192" 00:15:07.447 ] 00:15:07.447 } 00:15:07.447 }, 00:15:07.447 { 00:15:07.447 "method": "nvmf_set_max_subsystems", 00:15:07.447 "params": { 00:15:07.447 "max_subsystems": 1024 00:15:07.447 } 00:15:07.447 }, 00:15:07.447 { 00:15:07.447 "method": "nvmf_set_crdt", 00:15:07.447 "params": { 00:15:07.447 "crdt1": 0, 00:15:07.447 "crdt2": 0, 00:15:07.447 "crdt3": 0 
00:15:07.447 } 00:15:07.447 }, 00:15:07.447 { 00:15:07.447 "method": "nvmf_create_transport", 00:15:07.447 "params": { 00:15:07.447 "trtype": "TCP", 00:15:07.447 "max_queue_depth": 128, 00:15:07.447 "max_io_qpairs_per_ctrlr": 127, 00:15:07.447 "in_capsule_data_size": 4096, 00:15:07.447 "max_io_size": 131072, 00:15:07.447 "io_unit_size": 131072, 00:15:07.447 "max_aq_depth": 128, 00:15:07.447 "num_shared_buffers": 511, 00:15:07.447 "buf_cache_size": 4294967295, 00:15:07.447 "dif_insert_or_strip": false, 00:15:07.447 "zcopy": false, 00:15:07.447 "c2h_success": false, 00:15:07.447 "sock_priority": 0, 00:15:07.447 "abort_timeout_sec": 1, 00:15:07.447 "ack_timeout": 0, 00:15:07.447 "data_wr_pool_size": 0 00:15:07.447 } 00:15:07.447 }, 00:15:07.447 { 00:15:07.447 "method": "nvmf_create_subsystem", 00:15:07.447 "params": { 00:15:07.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.447 "allow_any_host": false, 00:15:07.447 "serial_number": "SPDK00000000000001", 00:15:07.447 "model_number": "SPDK bdev Controller", 00:15:07.447 "max_namespaces": 10, 00:15:07.447 "min_cntlid": 1, 00:15:07.447 "max_cntlid": 65519, 00:15:07.447 "ana_reporting": false 00:15:07.447 } 00:15:07.447 }, 00:15:07.447 { 00:15:07.447 "method": "nvmf_subsystem_add_host", 00:15:07.447 "params": { 00:15:07.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.447 "host": "nqn.2016-06.io.spdk:host1", 00:15:07.447 "psk": "key0" 00:15:07.447 } 00:15:07.447 }, 00:15:07.447 { 00:15:07.447 "method": "nvmf_subsystem_add_ns", 00:15:07.447 "params": { 00:15:07.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.447 "namespace": { 00:15:07.447 "nsid": 1, 00:15:07.447 "bdev_name": "malloc0", 00:15:07.447 "nguid": "1A0AEF42BE674545BB913B4AD710B2B1", 00:15:07.447 "uuid": "1a0aef42-be67-4545-bb91-3b4ad710b2b1", 00:15:07.447 "no_auto_visible": false 00:15:07.447 } 00:15:07.447 } 00:15:07.447 }, 00:15:07.447 { 00:15:07.447 "method": "nvmf_subsystem_add_listener", 00:15:07.447 "params": { 00:15:07.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.447 "listen_address": { 00:15:07.447 "trtype": "TCP", 00:15:07.447 "adrfam": "IPv4", 00:15:07.447 "traddr": "10.0.0.3", 00:15:07.447 "trsvcid": "4420" 00:15:07.447 }, 00:15:07.447 "secure_channel": true 00:15:07.447 } 00:15:07.447 } 00:15:07.447 ] 00:15:07.447 } 00:15:07.447 ] 00:15:07.447 }' 00:15:07.447 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:07.707 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:07.707 "subsystems": [ 00:15:07.707 { 00:15:07.707 "subsystem": "keyring", 00:15:07.707 "config": [ 00:15:07.707 { 00:15:07.707 "method": "keyring_file_add_key", 00:15:07.707 "params": { 00:15:07.707 "name": "key0", 00:15:07.707 "path": "/tmp/tmp.oJIX4EO56n" 00:15:07.707 } 00:15:07.707 } 00:15:07.707 ] 00:15:07.707 }, 00:15:07.707 { 00:15:07.707 "subsystem": "iobuf", 00:15:07.707 "config": [ 00:15:07.707 { 00:15:07.707 "method": "iobuf_set_options", 00:15:07.707 "params": { 00:15:07.707 "small_pool_count": 8192, 00:15:07.707 "large_pool_count": 1024, 00:15:07.707 "small_bufsize": 8192, 00:15:07.707 "large_bufsize": 135168 00:15:07.707 } 00:15:07.707 } 00:15:07.707 ] 00:15:07.707 }, 00:15:07.707 { 00:15:07.707 "subsystem": "sock", 00:15:07.707 "config": [ 00:15:07.707 { 00:15:07.707 "method": "sock_set_default_impl", 00:15:07.707 "params": { 00:15:07.707 "impl_name": "uring" 00:15:07.707 } 00:15:07.707 }, 00:15:07.707 { 00:15:07.707 "method": 
"sock_impl_set_options", 00:15:07.707 "params": { 00:15:07.707 "impl_name": "ssl", 00:15:07.707 "recv_buf_size": 4096, 00:15:07.707 "send_buf_size": 4096, 00:15:07.707 "enable_recv_pipe": true, 00:15:07.707 "enable_quickack": false, 00:15:07.707 "enable_placement_id": 0, 00:15:07.707 "enable_zerocopy_send_server": true, 00:15:07.707 "enable_zerocopy_send_client": false, 00:15:07.707 "zerocopy_threshold": 0, 00:15:07.707 "tls_version": 0, 00:15:07.707 "enable_ktls": false 00:15:07.707 } 00:15:07.707 }, 00:15:07.707 { 00:15:07.707 "method": "sock_impl_set_options", 00:15:07.707 "params": { 00:15:07.707 "impl_name": "posix", 00:15:07.707 "recv_buf_size": 2097152, 00:15:07.707 "send_buf_size": 2097152, 00:15:07.707 "enable_recv_pipe": true, 00:15:07.707 "enable_quickack": false, 00:15:07.707 "enable_placement_id": 0, 00:15:07.707 "enable_zerocopy_send_server": true, 00:15:07.707 "enable_zerocopy_send_client": false, 00:15:07.707 "zerocopy_threshold": 0, 00:15:07.707 "tls_version": 0, 00:15:07.707 "enable_ktls": false 00:15:07.707 } 00:15:07.707 }, 00:15:07.707 { 00:15:07.707 "method": "sock_impl_set_options", 00:15:07.707 "params": { 00:15:07.707 "impl_name": "uring", 00:15:07.707 "recv_buf_size": 2097152, 00:15:07.707 "send_buf_size": 2097152, 00:15:07.707 "enable_recv_pipe": true, 00:15:07.707 "enable_quickack": false, 00:15:07.707 "enable_placement_id": 0, 00:15:07.707 "enable_zerocopy_send_server": false, 00:15:07.707 "enable_zerocopy_send_client": false, 00:15:07.707 "zerocopy_threshold": 0, 00:15:07.707 "tls_version": 0, 00:15:07.707 "enable_ktls": false 00:15:07.707 } 00:15:07.707 } 00:15:07.707 ] 00:15:07.707 }, 00:15:07.707 { 00:15:07.707 "subsystem": "vmd", 00:15:07.707 "config": [] 00:15:07.707 }, 00:15:07.707 { 00:15:07.707 "subsystem": "accel", 00:15:07.707 "config": [ 00:15:07.707 { 00:15:07.707 "method": "accel_set_options", 00:15:07.707 "params": { 00:15:07.707 "small_cache_size": 128, 00:15:07.707 "large_cache_size": 16, 00:15:07.707 "task_count": 2048, 00:15:07.707 "sequence_count": 2048, 00:15:07.707 "buf_count": 2048 00:15:07.707 } 00:15:07.707 } 00:15:07.707 ] 00:15:07.707 }, 00:15:07.707 { 00:15:07.707 "subsystem": "bdev", 00:15:07.707 "config": [ 00:15:07.707 { 00:15:07.707 "method": "bdev_set_options", 00:15:07.707 "params": { 00:15:07.707 "bdev_io_pool_size": 65535, 00:15:07.707 "bdev_io_cache_size": 256, 00:15:07.707 "bdev_auto_examine": true, 00:15:07.707 "iobuf_small_cache_size": 128, 00:15:07.707 "iobuf_large_cache_size": 16 00:15:07.707 } 00:15:07.707 }, 00:15:07.707 { 00:15:07.707 "method": "bdev_raid_set_options", 00:15:07.707 "params": { 00:15:07.707 "process_window_size_kb": 1024, 00:15:07.708 "process_max_bandwidth_mb_sec": 0 00:15:07.708 } 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "method": "bdev_iscsi_set_options", 00:15:07.708 "params": { 00:15:07.708 "timeout_sec": 30 00:15:07.708 } 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "method": "bdev_nvme_set_options", 00:15:07.708 "params": { 00:15:07.708 "action_on_timeout": "none", 00:15:07.708 "timeout_us": 0, 00:15:07.708 "timeout_admin_us": 0, 00:15:07.708 "keep_alive_timeout_ms": 10000, 00:15:07.708 "arbitration_burst": 0, 00:15:07.708 "low_priority_weight": 0, 00:15:07.708 "medium_priority_weight": 0, 00:15:07.708 "high_priority_weight": 0, 00:15:07.708 "nvme_adminq_poll_period_us": 10000, 00:15:07.708 "nvme_ioq_poll_period_us": 0, 00:15:07.708 "io_queue_requests": 512, 00:15:07.708 "delay_cmd_submit": true, 00:15:07.708 "transport_retry_count": 4, 00:15:07.708 "bdev_retry_count": 3, 00:15:07.708 
"transport_ack_timeout": 0, 00:15:07.708 "ctrlr_loss_timeout_sec": 0, 00:15:07.708 "reconnect_delay_sec": 0, 00:15:07.708 "fast_io_fail_timeout_sec": 0, 00:15:07.708 "disable_auto_failback": false, 00:15:07.708 "generate_uuids": false, 00:15:07.708 "transport_tos": 0, 00:15:07.708 "nvme_error_stat": false, 00:15:07.708 "rdma_srq_size": 0, 00:15:07.708 "io_path_stat": false, 00:15:07.708 "allow_accel_sequence": false, 00:15:07.708 "rdma_max_cq_size": 0, 00:15:07.708 "rdma_cm_event_timeout_ms": 0, 00:15:07.708 "dhchap_digests": [ 00:15:07.708 "sha256", 00:15:07.708 "sha384", 00:15:07.708 "sha512" 00:15:07.708 ], 00:15:07.708 "dhchap_dhgroups": [ 00:15:07.708 "null", 00:15:07.708 "ffdhe2048", 00:15:07.708 "ffdhe3072", 00:15:07.708 "ffdhe4096", 00:15:07.708 "ffdhe6144", 00:15:07.708 "ffdhe8192" 00:15:07.708 ] 00:15:07.708 } 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "method": "bdev_nvme_attach_controller", 00:15:07.708 "params": { 00:15:07.708 "name": "TLSTEST", 00:15:07.708 "trtype": "TCP", 00:15:07.708 "adrfam": "IPv4", 00:15:07.708 "traddr": "10.0.0.3", 00:15:07.708 "trsvcid": "4420", 00:15:07.708 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.708 "prchk_reftag": false, 00:15:07.708 "prchk_guard": false, 00:15:07.708 "ctrlr_loss_timeout_sec": 0, 00:15:07.708 "reconnect_delay_sec": 0, 00:15:07.708 "fast_io_fail_timeout_sec": 0, 00:15:07.708 "psk": "key0", 00:15:07.708 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:07.708 "hdgst": false, 00:15:07.708 "ddgst": false 00:15:07.708 } 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "method": "bdev_nvme_set_hotplug", 00:15:07.708 "params": { 00:15:07.708 "period_us": 100000, 00:15:07.708 "enable": false 00:15:07.708 } 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "method": "bdev_wait_for_examine" 00:15:07.708 } 00:15:07.708 ] 00:15:07.708 }, 00:15:07.708 { 00:15:07.708 "subsystem": "nbd", 00:15:07.708 "config": [] 00:15:07.708 } 00:15:07.708 ] 00:15:07.708 }' 00:15:07.708 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 83838 00:15:07.708 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83838 ']' 00:15:07.708 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83838 00:15:07.708 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:07.708 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.708 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83838 00:15:07.967 killing process with pid 83838 00:15:07.967 Received shutdown signal, test time was about 10.000000 seconds 00:15:07.967 00:15:07.967 Latency(us) 00:15:07.967 [2024-12-07T22:46:22.733Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.967 [2024-12-07T22:46:22.733Z] =================================================================================================================== 00:15:07.967 [2024-12-07T22:46:22.733Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:07.967 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:07.967 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:07.967 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83838' 00:15:07.967 22:46:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83838 00:15:07.967 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83838 00:15:07.967 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 83789 00:15:07.967 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83789 ']' 00:15:07.967 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83789 00:15:07.967 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:07.967 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.967 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83789 00:15:07.967 killing process with pid 83789 00:15:07.967 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:07.967 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:07.967 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83789' 00:15:07.967 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83789 00:15:07.967 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83789 00:15:08.227 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:08.227 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:08.227 "subsystems": [ 00:15:08.227 { 00:15:08.227 "subsystem": "keyring", 00:15:08.227 "config": [ 00:15:08.227 { 00:15:08.227 "method": "keyring_file_add_key", 00:15:08.227 "params": { 00:15:08.227 "name": "key0", 00:15:08.227 "path": "/tmp/tmp.oJIX4EO56n" 00:15:08.227 } 00:15:08.227 } 00:15:08.227 ] 00:15:08.227 }, 00:15:08.227 { 00:15:08.227 "subsystem": "iobuf", 00:15:08.227 "config": [ 00:15:08.227 { 00:15:08.227 "method": "iobuf_set_options", 00:15:08.227 "params": { 00:15:08.227 "small_pool_count": 8192, 00:15:08.227 "large_pool_count": 1024, 00:15:08.227 "small_bufsize": 8192, 00:15:08.227 "large_bufsize": 135168 00:15:08.227 } 00:15:08.227 } 00:15:08.227 ] 00:15:08.227 }, 00:15:08.227 { 00:15:08.227 "subsystem": "sock", 00:15:08.227 "config": [ 00:15:08.227 { 00:15:08.227 "method": "sock_set_default_impl", 00:15:08.227 "params": { 00:15:08.227 "impl_name": "uring" 00:15:08.227 } 00:15:08.227 }, 00:15:08.227 { 00:15:08.227 "method": "sock_impl_set_options", 00:15:08.227 "params": { 00:15:08.227 "impl_name": "ssl", 00:15:08.227 "recv_buf_size": 4096, 00:15:08.227 "send_buf_size": 4096, 00:15:08.227 "enable_recv_pipe": true, 00:15:08.227 "enable_quickack": false, 00:15:08.227 "enable_placement_id": 0, 00:15:08.227 "enable_zerocopy_send_server": true, 00:15:08.227 "enable_zerocopy_send_client": false, 00:15:08.227 "zerocopy_threshold": 0, 00:15:08.227 "tls_version": 0, 00:15:08.227 "enable_ktls": false 00:15:08.227 } 00:15:08.227 }, 00:15:08.227 { 00:15:08.227 "method": "sock_impl_set_options", 00:15:08.227 "params": { 00:15:08.227 "impl_name": "posix", 00:15:08.227 "recv_buf_size": 2097152, 00:15:08.227 "send_buf_size": 2097152, 00:15:08.227 "enable_recv_pipe": true, 00:15:08.227 "enable_quickack": false, 00:15:08.227 "enable_placement_id": 0, 00:15:08.227 "enable_zerocopy_send_server": true, 
00:15:08.227 "enable_zerocopy_send_client": false, 00:15:08.227 "zerocopy_threshold": 0, 00:15:08.227 "tls_version": 0, 00:15:08.227 "enable_ktls": false 00:15:08.227 } 00:15:08.227 }, 00:15:08.227 { 00:15:08.227 "method": "sock_impl_set_options", 00:15:08.227 "params": { 00:15:08.227 "impl_name": "uring", 00:15:08.227 "recv_buf_size": 2097152, 00:15:08.227 "send_buf_size": 2097152, 00:15:08.227 "enable_recv_pipe": true, 00:15:08.227 "enable_quickack": false, 00:15:08.227 "enable_placement_id": 0, 00:15:08.227 "enable_zerocopy_send_server": false, 00:15:08.227 "enable_zerocopy_send_client": false, 00:15:08.227 "zerocopy_threshold": 0, 00:15:08.227 "tls_version": 0, 00:15:08.227 "enable_ktls": false 00:15:08.227 } 00:15:08.227 } 00:15:08.227 ] 00:15:08.227 }, 00:15:08.227 { 00:15:08.227 "subsystem": "vmd", 00:15:08.227 "config": [] 00:15:08.227 }, 00:15:08.227 { 00:15:08.227 "subsystem": "accel", 00:15:08.227 "config": [ 00:15:08.227 { 00:15:08.227 "method": "accel_set_options", 00:15:08.227 "params": { 00:15:08.227 "small_cache_size": 128, 00:15:08.227 "large_cache_size": 16, 00:15:08.227 "task_count": 2048, 00:15:08.227 "sequence_count": 2048, 00:15:08.227 "buf_count": 2048 00:15:08.227 } 00:15:08.227 } 00:15:08.227 ] 00:15:08.227 }, 00:15:08.227 { 00:15:08.227 "subsystem": "bdev", 00:15:08.227 "config": [ 00:15:08.227 { 00:15:08.227 "method": "bdev_set_options", 00:15:08.227 "params": { 00:15:08.227 "bdev_io_pool_size": 65535, 00:15:08.227 "bdev_io_cache_size": 256, 00:15:08.227 "bdev_auto_examine": true, 00:15:08.227 "iobuf_small_cache_size": 128, 00:15:08.227 "iobuf_large_cache_size": 16 00:15:08.227 } 00:15:08.227 }, 00:15:08.227 { 00:15:08.227 "method": "bdev_raid_set_options", 00:15:08.227 "params": { 00:15:08.227 "process_window_size_kb": 1024, 00:15:08.227 "process_max_bandwidth_mb_sec": 0 00:15:08.227 } 00:15:08.227 }, 00:15:08.227 { 00:15:08.228 "method": "bdev_iscsi_set_options", 00:15:08.228 "params": { 00:15:08.228 "timeout_sec": 30 00:15:08.228 } 00:15:08.228 }, 00:15:08.228 { 00:15:08.228 "method": "bdev_nvme_set_options", 00:15:08.228 "params": { 00:15:08.228 "action_on_timeout": "none", 00:15:08.228 "timeout_us": 0, 00:15:08.228 "timeout_admin_us": 0, 00:15:08.228 "keep_alive_timeout_ms": 10000, 00:15:08.228 "arbitration_burst": 0, 00:15:08.228 "low_priority_weight": 0, 00:15:08.228 "medium_priority_weight": 0, 00:15:08.228 "high_priority_weight": 0, 00:15:08.228 "nvme_adminq_poll_period_us": 10000, 00:15:08.228 "nvme_ioq_poll_period_us": 0, 00:15:08.228 "io_queue_requests": 0, 00:15:08.228 "delay_cmd_submit": true, 00:15:08.228 "transport_retry_count": 4, 00:15:08.228 "bdev_retry_count": 3, 00:15:08.228 "transport_ack_timeout": 0, 00:15:08.228 "ctrlr_loss_timeout_sec": 0, 00:15:08.228 "reconnect_delay_sec": 0, 00:15:08.228 "fast_io_fail_timeout_sec": 0, 00:15:08.228 "disable_auto_failback": false, 00:15:08.228 "generate_uuids": false, 00:15:08.228 "transport_tos": 0, 00:15:08.228 "nvme_error_stat": false, 00:15:08.228 "rdma_srq_size": 0, 00:15:08.228 "io_path_stat": false, 00:15:08.228 "allow_accel_sequence": false, 00:15:08.228 "rdma_max_cq_size": 0, 00:15:08.228 "rdma_cm_event_timeout_ms": 0, 00:15:08.228 "dhchap_digests": [ 00:15:08.228 "sha256", 00:15:08.228 "sha384", 00:15:08.228 "sha512" 00:15:08.228 ], 00:15:08.228 "dhchap_dhgroups": [ 00:15:08.228 "null", 00:15:08.228 "ffdhe2048", 00:15:08.228 "ffdhe3072", 00:15:08.228 "ffdhe4096", 00:15:08.228 "ffdhe6144", 00:15:08.228 "ffdhe8192" 00:15:08.228 ] 00:15:08.228 } 00:15:08.228 }, 00:15:08.228 { 00:15:08.228 
"method": "bdev_nvme_set_hotplug", 00:15:08.228 "params": { 00:15:08.228 "period_us": 100000, 00:15:08.228 "enable": false 00:15:08.228 } 00:15:08.228 }, 00:15:08.228 { 00:15:08.228 "method": "bdev_malloc_create", 00:15:08.228 "params": { 00:15:08.228 "name": "malloc0", 00:15:08.228 "num_blocks": 8192, 00:15:08.228 "block_size": 4096, 00:15:08.228 "physical_block_size": 4096, 00:15:08.228 "uuid": "1a0aef42-be67-4545-bb91-3b4ad710b2b1", 00:15:08.228 "optimal_io_boundary": 0, 00:15:08.228 "md_size": 0, 00:15:08.228 "dif_type": 0, 00:15:08.228 "dif_is_head_of_md": false, 00:15:08.228 "dif_pi_format": 0 00:15:08.228 } 00:15:08.228 }, 00:15:08.228 { 00:15:08.228 "method": "bdev_wait_for_examine" 00:15:08.228 } 00:15:08.228 ] 00:15:08.228 }, 00:15:08.228 { 00:15:08.228 "subsystem": "nbd", 00:15:08.228 "config": [] 00:15:08.228 }, 00:15:08.228 { 00:15:08.228 "subsystem": "scheduler", 00:15:08.228 "config": [ 00:15:08.228 { 00:15:08.228 "method": "framework_set_scheduler", 00:15:08.228 "params": { 00:15:08.228 "name": "static" 00:15:08.228 } 00:15:08.228 } 00:15:08.228 ] 00:15:08.228 }, 00:15:08.228 { 00:15:08.228 "subsystem": "nvmf", 00:15:08.228 "config": [ 00:15:08.228 { 00:15:08.228 "method": "nvmf_set_config", 00:15:08.228 "params": { 00:15:08.228 "discovery_filter": "match_any", 00:15:08.228 "admin_cmd_passthru": { 00:15:08.228 "identify_ctrlr": false 00:15:08.228 }, 00:15:08.228 "dhchap_digests": [ 00:15:08.228 "sha256", 00:15:08.228 "sha384", 00:15:08.228 "sha512" 00:15:08.228 ], 00:15:08.228 "dhchap_dhgroups": [ 00:15:08.228 "null", 00:15:08.228 "ffdhe2048", 00:15:08.228 "ffdhe3072", 00:15:08.228 "ffdhe4096", 00:15:08.228 "ffdhe6144", 00:15:08.228 "ffdhe8192" 00:15:08.228 ] 00:15:08.228 } 00:15:08.228 }, 00:15:08.228 { 00:15:08.228 "method": "nvmf_set_max_subsystems", 00:15:08.228 "params": { 00:15:08.228 "max_subsystems": 1024 00:15:08.228 } 00:15:08.228 }, 00:15:08.228 { 00:15:08.228 "method": "nvmf_set_crdt", 00:15:08.228 "params": { 00:15:08.228 "crdt1": 0, 00:15:08.228 "crdt2": 0, 00:15:08.228 "crdt3": 0 00:15:08.228 } 00:15:08.228 }, 00:15:08.228 { 00:15:08.228 "method": "nvmf_create_transport", 00:15:08.228 "params": { 00:15:08.228 "trtype": "TCP", 00:15:08.228 "max_queue_depth": 128, 00:15:08.228 "max_io_qpairs_per_ctrlr": 127, 00:15:08.228 "in_capsule_data_size": 4096, 00:15:08.228 "max_io_size": 131072, 00:15:08.228 "io_unit_size": 131072, 00:15:08.228 "max_aq_depth": 128, 00:15:08.228 "num_shared_buffers": 511, 00:15:08.228 "buf_cache_size": 4294967295, 00:15:08.228 "dif_insert_or_strip": false, 00:15:08.228 "zcopy": false, 00:15:08.228 "c2h_success": false, 00:15:08.228 "sock_priority": 0, 00:15:08.228 "abort_timeout_sec": 1, 00:15:08.228 "ack_timeout": 0, 00:15:08.228 "data_wr_pool_size": 0 00:15:08.228 } 00:15:08.228 }, 00:15:08.228 { 00:15:08.228 "method": "nvmf_create_subsystem", 00:15:08.228 "params": { 00:15:08.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.228 "allow_any_host": false, 00:15:08.228 "serial_number": "SPDK00000000000001", 00:15:08.228 "model_number": "SPDK bdev Controller", 00:15:08.228 "max_namespaces": 10, 00:15:08.228 "min_cntlid": 1, 00:15:08.228 "max_cntlid": 65519, 00:15:08.228 "ana_reporting": false 00:15:08.228 } 00:15:08.228 }, 00:15:08.228 { 00:15:08.228 "method": "nvmf_subsystem_add_host", 00:15:08.228 "params": { 00:15:08.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.228 "host": "nqn.2016-06.io.spdk:host1", 00:15:08.228 "psk": "key0" 00:15:08.228 } 00:15:08.228 }, 00:15:08.228 { 00:15:08.228 "method": "nvmf_subsystem_add_ns", 
00:15:08.228 "params": { 00:15:08.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.228 "namespace": { 00:15:08.228 "nsid": 1, 00:15:08.228 "bdev_name": "malloc0", 00:15:08.228 "nguid": "1A0AEF42BE674545BB913B4AD710B2B1", 00:15:08.228 "uuid": "1a0aef42-be67-4545-bb91-3b4ad710b2b1", 00:15:08.228 "no_auto_visible": false 00:15:08.228 } 00:15:08.228 } 00:15:08.228 }, 00:15:08.228 { 00:15:08.228 "method": "nvmf_subsystem_add_listener", 00:15:08.228 "params": { 00:15:08.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:08.228 "listen_address": { 00:15:08.228 "trtype": "TCP", 00:15:08.228 "adrfam": "IPv4", 00:15:08.228 "traddr": "10.0.0.3", 00:15:08.228 "trsvcid": "4420" 00:15:08.228 }, 00:15:08.228 "secure_channel": true 00:15:08.228 } 00:15:08.228 } 00:15:08.228 ] 00:15:08.228 } 00:15:08.228 ] 00:15:08.228 }' 00:15:08.228 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:08.228 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:08.228 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.228 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83880 00:15:08.228 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83880 00:15:08.229 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:08.229 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83880 ']' 00:15:08.229 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.229 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:08.229 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.229 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:08.229 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.229 [2024-12-07 22:46:22.870792] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:08.229 [2024-12-07 22:46:22.871172] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.488 [2024-12-07 22:46:23.012102] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.488 [2024-12-07 22:46:23.047771] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.488 [2024-12-07 22:46:23.047815] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:08.488 [2024-12-07 22:46:23.047842] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.488 [2024-12-07 22:46:23.047849] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:15:08.488 [2024-12-07 22:46:23.047855] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.488 [2024-12-07 22:46:23.047963] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.488 [2024-12-07 22:46:23.191444] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:08.488 [2024-12-07 22:46:23.246729] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.765 [2024-12-07 22:46:23.289063] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:08.765 [2024-12-07 22:46:23.289295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:09.331 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:09.331 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:09.331 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:09.331 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:09.331 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:09.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:09.331 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:09.331 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=83912 00:15:09.331 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 83912 /var/tmp/bdevperf.sock 00:15:09.331 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83912 ']' 00:15:09.331 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:09.331 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:09.331 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:09.331 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:09.331 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:09.331 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:09.331 "subsystems": [ 00:15:09.331 { 00:15:09.331 "subsystem": "keyring", 00:15:09.331 "config": [ 00:15:09.331 { 00:15:09.331 "method": "keyring_file_add_key", 00:15:09.331 "params": { 00:15:09.331 "name": "key0", 00:15:09.331 "path": "/tmp/tmp.oJIX4EO56n" 00:15:09.331 } 00:15:09.331 } 00:15:09.331 ] 00:15:09.331 }, 00:15:09.331 { 00:15:09.331 "subsystem": "iobuf", 00:15:09.331 "config": [ 00:15:09.331 { 00:15:09.332 "method": "iobuf_set_options", 00:15:09.332 "params": { 00:15:09.332 "small_pool_count": 8192, 00:15:09.332 "large_pool_count": 1024, 00:15:09.332 "small_bufsize": 8192, 00:15:09.332 "large_bufsize": 135168 00:15:09.332 } 00:15:09.332 } 00:15:09.332 ] 00:15:09.332 }, 00:15:09.332 { 00:15:09.332 "subsystem": "sock", 00:15:09.332 "config": [ 00:15:09.332 { 00:15:09.332 "method": "sock_set_default_impl", 00:15:09.332 "params": { 00:15:09.332 "impl_name": "uring" 00:15:09.332 } 00:15:09.332 }, 00:15:09.332 { 00:15:09.332 "method": "sock_impl_set_options", 00:15:09.332 "params": { 00:15:09.332 "impl_name": "ssl", 00:15:09.332 "recv_buf_size": 4096, 00:15:09.332 "send_buf_size": 4096, 00:15:09.332 "enable_recv_pipe": true, 00:15:09.332 "enable_quickack": false, 00:15:09.332 "enable_placement_id": 0, 00:15:09.332 "enable_zerocopy_send_server": true, 00:15:09.332 "enable_zerocopy_send_client": false, 00:15:09.332 "zerocopy_threshold": 0, 00:15:09.332 "tls_version": 0, 00:15:09.332 "enable_ktls": false 00:15:09.332 } 00:15:09.332 }, 00:15:09.332 { 00:15:09.332 "method": "sock_impl_set_options", 00:15:09.332 "params": { 00:15:09.332 "impl_name": "posix", 00:15:09.332 "recv_buf_size": 2097152, 00:15:09.332 "send_buf_size": 2097152, 00:15:09.332 "enable_recv_pipe": true, 00:15:09.332 "enable_quickack": false, 00:15:09.332 "enable_placement_id": 0, 00:15:09.332 "enable_zerocopy_send_server": true, 00:15:09.332 "enable_zerocopy_send_client": false, 00:15:09.332 "zerocopy_threshold": 0, 00:15:09.332 "tls_version": 0, 00:15:09.332 "enable_ktls": false 00:15:09.332 } 00:15:09.332 }, 00:15:09.332 { 00:15:09.332 "method": "sock_impl_set_options", 00:15:09.332 "params": { 00:15:09.332 "impl_name": "uring", 00:15:09.332 "recv_buf_size": 2097152, 00:15:09.332 "send_buf_size": 2097152, 00:15:09.332 "enable_recv_pipe": true, 00:15:09.332 "enable_quickack": false, 00:15:09.332 "enable_placement_id": 0, 00:15:09.332 "enable_zerocopy_send_server": false, 00:15:09.332 "enable_zerocopy_send_client": false, 00:15:09.332 "zerocopy_threshold": 0, 00:15:09.332 "tls_version": 0, 00:15:09.332 "enable_ktls": false 00:15:09.332 } 00:15:09.332 } 00:15:09.332 ] 00:15:09.332 }, 00:15:09.332 { 00:15:09.332 "subsystem": "vmd", 00:15:09.332 "config": [] 00:15:09.332 }, 00:15:09.332 { 00:15:09.332 "subsystem": "accel", 00:15:09.332 "config": [ 00:15:09.332 { 00:15:09.332 "method": "accel_set_options", 00:15:09.332 "params": { 00:15:09.332 "small_cache_size": 128, 00:15:09.332 "large_cache_size": 16, 00:15:09.332 "task_count": 2048, 00:15:09.332 "sequence_count": 2048, 00:15:09.332 "buf_count": 2048 00:15:09.332 } 00:15:09.332 } 00:15:09.332 ] 00:15:09.332 }, 00:15:09.332 { 00:15:09.332 "subsystem": 
"bdev", 00:15:09.332 "config": [ 00:15:09.332 { 00:15:09.332 "method": "bdev_set_options", 00:15:09.332 "params": { 00:15:09.332 "bdev_io_pool_size": 65535, 00:15:09.332 "bdev_io_cache_size": 256, 00:15:09.332 "bdev_auto_examine": true, 00:15:09.332 "iobuf_small_cache_size": 128, 00:15:09.332 "iobuf_large_cache_size": 16 00:15:09.332 } 00:15:09.332 }, 00:15:09.332 { 00:15:09.332 "method": "bdev_raid_set_options", 00:15:09.332 "params": { 00:15:09.332 "process_window_size_kb": 1024, 00:15:09.332 "process_max_bandwidth_mb_sec": 0 00:15:09.332 } 00:15:09.332 }, 00:15:09.332 { 00:15:09.332 "method": "bdev_iscsi_set_options", 00:15:09.332 "params": { 00:15:09.332 "timeout_sec": 30 00:15:09.332 } 00:15:09.332 }, 00:15:09.332 { 00:15:09.332 "method": "bdev_nvme_set_options", 00:15:09.332 "params": { 00:15:09.332 "action_on_timeout": "none", 00:15:09.332 "timeout_us": 0, 00:15:09.332 "timeout_admin_us": 0, 00:15:09.332 "keep_alive_timeout_ms": 10000, 00:15:09.332 "arbitration_burst": 0, 00:15:09.332 "low_priority_weight": 0, 00:15:09.332 "medium_priority_weight": 0, 00:15:09.332 "high_priority_weight": 0, 00:15:09.332 "nvme_adminq_poll_period_us": 10000, 00:15:09.332 "nvme_ioq_poll_period_us": 0, 00:15:09.332 "io_queue_requests": 512, 00:15:09.332 "delay_cmd_submit": true, 00:15:09.332 "transport_retry_count": 4, 00:15:09.332 "bdev_retry_count": 3, 00:15:09.332 "transport_ack_timeout": 0, 00:15:09.332 "ctrlr_loss_timeout_sec": 0, 00:15:09.332 "reconnect_delay_sec": 0, 00:15:09.332 "fast_io_fail_timeout_sec": 0, 00:15:09.332 "disable_auto_failback": false, 00:15:09.332 "generate_uuids": false, 00:15:09.332 "transport_tos": 0, 00:15:09.332 "nvme_error_stat": false, 00:15:09.332 "rdma_srq_size": 0, 00:15:09.332 "io_path_stat": false, 00:15:09.332 "allow_accel_sequence": false, 00:15:09.332 "rdma_max_cq_size": 0, 00:15:09.332 "rdma_cm_event_timeout_ms": 0, 00:15:09.332 "dhchap_digests": [ 00:15:09.332 "sha256", 00:15:09.332 "sha384", 00:15:09.332 "sha512" 00:15:09.332 ], 00:15:09.332 "dhchap_dhgroups": [ 00:15:09.332 "null", 00:15:09.332 "ffdhe2048", 00:15:09.332 "ffdhe3072", 00:15:09.332 "ffdhe4096", 00:15:09.332 "ffdhe6144", 00:15:09.332 "ffdhe8192" 00:15:09.332 ] 00:15:09.332 } 00:15:09.332 }, 00:15:09.332 { 00:15:09.332 "method": "bdev_nvme_attach_controller", 00:15:09.332 "params": { 00:15:09.332 "name": "TLSTEST", 00:15:09.332 "trtype": "TCP", 00:15:09.332 "adrfam": "IPv4", 00:15:09.332 "traddr": "10.0.0.3", 00:15:09.332 "trsvcid": "4420", 00:15:09.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.332 "prchk_reftag": false, 00:15:09.332 "prchk_guard": false, 00:15:09.332 "ctrlr_loss_timeout_sec": 0, 00:15:09.332 "reconnect_delay_sec": 0, 00:15:09.332 "fast_io_fail_timeout_sec": 0, 00:15:09.332 "psk": "key0", 00:15:09.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:09.332 "hdgst": false, 00:15:09.332 "ddgst": false 00:15:09.332 } 00:15:09.332 }, 00:15:09.332 { 00:15:09.332 "method": "bdev_nvme_set_hotplug", 00:15:09.332 "params": { 00:15:09.332 "period_us": 100000, 00:15:09.332 "enable": false 00:15:09.332 } 00:15:09.332 }, 00:15:09.332 { 00:15:09.332 "method": "bdev_wait_for_examine" 00:15:09.332 } 00:15:09.332 ] 00:15:09.332 }, 00:15:09.332 { 00:15:09.332 "subsystem": "nbd", 00:15:09.332 "config": [] 00:15:09.332 } 00:15:09.332 ] 00:15:09.332 }' 00:15:09.332 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:09.332 [2024-12-07 22:46:23.999336] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:09.332 [2024-12-07 22:46:23.999676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83912 ] 00:15:09.591 [2024-12-07 22:46:24.142123] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:09.591 [2024-12-07 22:46:24.186431] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.591 [2024-12-07 22:46:24.303331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:09.591 [2024-12-07 22:46:24.336157] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:10.525 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:10.525 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:10.525 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:10.525 Running I/O for 10 seconds... 00:15:12.435 4027.00 IOPS, 15.73 MiB/s [2024-12-07T22:46:28.580Z] 4071.00 IOPS, 15.90 MiB/s [2024-12-07T22:46:29.150Z] 4078.67 IOPS, 15.93 MiB/s [2024-12-07T22:46:30.527Z] 4080.25 IOPS, 15.94 MiB/s [2024-12-07T22:46:31.464Z] 4102.60 IOPS, 16.03 MiB/s [2024-12-07T22:46:32.402Z] 4217.33 IOPS, 16.47 MiB/s [2024-12-07T22:46:33.339Z] 4303.43 IOPS, 16.81 MiB/s [2024-12-07T22:46:34.278Z] 4366.88 IOPS, 17.06 MiB/s [2024-12-07T22:46:35.248Z] 4417.67 IOPS, 17.26 MiB/s [2024-12-07T22:46:35.248Z] 4458.90 IOPS, 17.42 MiB/s 00:15:20.482 Latency(us) 00:15:20.482 [2024-12-07T22:46:35.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.482 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:20.482 Verification LBA range: start 0x0 length 0x2000 00:15:20.482 TLSTESTn1 : 10.01 4465.33 17.44 0.00 0.00 28617.15 4081.11 25261.15 00:15:20.482 [2024-12-07T22:46:35.248Z] =================================================================================================================== 00:15:20.482 [2024-12-07T22:46:35.248Z] Total : 4465.33 17.44 0.00 0.00 28617.15 4081.11 25261.15 00:15:20.482 { 00:15:20.482 "results": [ 00:15:20.482 { 00:15:20.482 "job": "TLSTESTn1", 00:15:20.482 "core_mask": "0x4", 00:15:20.482 "workload": "verify", 00:15:20.482 "status": "finished", 00:15:20.482 "verify_range": { 00:15:20.482 "start": 0, 00:15:20.482 "length": 8192 00:15:20.482 }, 00:15:20.482 "queue_depth": 128, 00:15:20.482 "io_size": 4096, 00:15:20.482 "runtime": 10.013807, 00:15:20.482 "iops": 4465.334712362641, 00:15:20.482 "mibps": 17.442713720166566, 00:15:20.482 "io_failed": 0, 00:15:20.482 "io_timeout": 0, 00:15:20.482 "avg_latency_us": 28617.14583570695, 00:15:20.482 "min_latency_us": 4081.1054545454544, 00:15:20.482 "max_latency_us": 25261.14909090909 00:15:20.482 } 00:15:20.482 ], 00:15:20.482 "core_count": 1 00:15:20.482 } 00:15:20.482 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:20.482 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 83912 00:15:20.482 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83912 ']' 00:15:20.482 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 83912 00:15:20.482 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:20.482 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.482 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83912 00:15:20.482 killing process with pid 83912 00:15:20.482 Received shutdown signal, test time was about 10.000000 seconds 00:15:20.482 00:15:20.482 Latency(us) 00:15:20.482 [2024-12-07T22:46:35.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.482 [2024-12-07T22:46:35.248Z] =================================================================================================================== 00:15:20.482 [2024-12-07T22:46:35.248Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:20.482 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:20.482 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:20.482 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83912' 00:15:20.482 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83912 00:15:20.482 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83912 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 83880 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83880 ']' 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83880 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83880 00:15:20.754 killing process with pid 83880 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83880' 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83880 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83880 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84051 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:20.754 22:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84051 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84051 ']' 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:20.754 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.012 [2024-12-07 22:46:35.566242] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:21.012 [2024-12-07 22:46:35.566339] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:21.012 [2024-12-07 22:46:35.706984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.012 [2024-12-07 22:46:35.748490] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:21.012 [2024-12-07 22:46:35.748795] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:21.012 [2024-12-07 22:46:35.748821] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:21.012 [2024-12-07 22:46:35.748832] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:21.012 [2024-12-07 22:46:35.748842] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:21.012 [2024-12-07 22:46:35.748896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.271 [2024-12-07 22:46:35.783817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:21.271 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:21.271 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:21.271 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:21.271 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:21.271 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:21.271 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:21.271 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.oJIX4EO56n 00:15:21.271 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.oJIX4EO56n 00:15:21.271 22:46:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:21.529 [2024-12-07 22:46:36.131107] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:21.529 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:21.788 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:22.046 [2024-12-07 22:46:36.623183] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:22.046 [2024-12-07 22:46:36.623398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:22.046 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:22.304 malloc0 00:15:22.304 22:46:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:22.575 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.oJIX4EO56n 00:15:22.833 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:22.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
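The initiator side that follows mirrors this shape: bdevperf is started paused (-z) on its own RPC socket, the same PSK file is registered there, and the TLS connection is made with bdev_nvme_attach_controller --psk before bdevperf.py kicks off the workload. A condensed sketch using the exact values logged below — the BDEVPERF=, RPC=, and SOCK= variables and the backgrounding with & are illustrative; in the harness the process lifetime is managed by waitforlisten/killprocess:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock

    $BDEVPERF -m 2 -z -r $SOCK -q 128 -o 4k -w verify -t 1 &   # -z: start idle and wait for RPC configuration
    $RPC -s $SOCK keyring_file_add_key key0 /tmp/tmp.oJIX4EO56n
    $RPC -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1   # TLS-PSK connect
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests   # drive the verify workload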
00:15:22.833 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84099 00:15:22.833 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:22.833 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:22.833 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84099 /var/tmp/bdevperf.sock 00:15:22.833 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84099 ']' 00:15:22.833 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:22.833 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:22.833 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:22.833 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:22.833 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:23.091 [2024-12-07 22:46:37.633046] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:23.091 [2024-12-07 22:46:37.633305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84099 ] 00:15:23.091 [2024-12-07 22:46:37.767762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.091 [2024-12-07 22:46:37.808414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.091 [2024-12-07 22:46:37.840740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:23.349 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:23.349 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:23.349 22:46:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oJIX4EO56n 00:15:23.349 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:23.607 [2024-12-07 22:46:38.298724] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:23.607 nvme0n1 00:15:23.865 22:46:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:23.865 Running I/O for 1 seconds... 
00:15:24.798 4760.00 IOPS, 18.59 MiB/s 00:15:24.798 Latency(us) 00:15:24.798 [2024-12-07T22:46:39.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.798 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:24.798 Verification LBA range: start 0x0 length 0x2000 00:15:24.798 nvme0n1 : 1.02 4803.92 18.77 0.00 0.00 26330.96 2383.13 19184.17 00:15:24.798 [2024-12-07T22:46:39.564Z] =================================================================================================================== 00:15:24.798 [2024-12-07T22:46:39.564Z] Total : 4803.92 18.77 0.00 0.00 26330.96 2383.13 19184.17 00:15:24.798 { 00:15:24.798 "results": [ 00:15:24.798 { 00:15:24.798 "job": "nvme0n1", 00:15:24.798 "core_mask": "0x2", 00:15:24.798 "workload": "verify", 00:15:24.798 "status": "finished", 00:15:24.798 "verify_range": { 00:15:24.798 "start": 0, 00:15:24.798 "length": 8192 00:15:24.798 }, 00:15:24.798 "queue_depth": 128, 00:15:24.798 "io_size": 4096, 00:15:24.798 "runtime": 1.017503, 00:15:24.798 "iops": 4803.917040048039, 00:15:24.798 "mibps": 18.765300937687652, 00:15:24.798 "io_failed": 0, 00:15:24.798 "io_timeout": 0, 00:15:24.798 "avg_latency_us": 26330.957131379262, 00:15:24.798 "min_latency_us": 2383.1272727272726, 00:15:24.798 "max_latency_us": 19184.174545454545 00:15:24.799 } 00:15:24.799 ], 00:15:24.799 "core_count": 1 00:15:24.799 } 00:15:24.799 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84099 00:15:24.799 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84099 ']' 00:15:24.799 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84099 00:15:24.799 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:24.799 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:24.799 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84099 00:15:25.057 killing process with pid 84099 00:15:25.057 Received shutdown signal, test time was about 1.000000 seconds 00:15:25.057 00:15:25.057 Latency(us) 00:15:25.057 [2024-12-07T22:46:39.823Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.057 [2024-12-07T22:46:39.823Z] =================================================================================================================== 00:15:25.057 [2024-12-07T22:46:39.823Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:25.057 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:25.057 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:25.057 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84099' 00:15:25.057 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84099 00:15:25.057 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84099 00:15:25.057 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 84051 00:15:25.057 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84051 ']' 00:15:25.057 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84051 00:15:25.057 22:46:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:25.057 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:25.057 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84051 00:15:25.057 killing process with pid 84051 00:15:25.057 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:25.057 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:25.057 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84051' 00:15:25.057 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84051 00:15:25.057 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84051 00:15:25.315 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:25.315 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:25.315 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:25.315 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:25.315 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84137 00:15:25.315 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:25.315 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84137 00:15:25.315 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84137 ']' 00:15:25.315 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.315 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.315 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.315 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.315 22:46:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:25.315 [2024-12-07 22:46:39.930771] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:25.315 [2024-12-07 22:46:39.930867] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.315 [2024-12-07 22:46:40.068955] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.573 [2024-12-07 22:46:40.100204] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.573 [2024-12-07 22:46:40.100282] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:25.573 [2024-12-07 22:46:40.100292] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.574 [2024-12-07 22:46:40.100298] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.574 [2024-12-07 22:46:40.100304] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.574 [2024-12-07 22:46:40.100326] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.574 [2024-12-07 22:46:40.125824] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:26.141 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.141 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:26.141 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:26.141 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:26.141 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.401 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.401 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:26.401 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.401 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.401 [2024-12-07 22:46:40.920114] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.401 malloc0 00:15:26.401 [2024-12-07 22:46:40.954141] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:26.401 [2024-12-07 22:46:40.954352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:26.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:26.401 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.401 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84170 00:15:26.401 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:26.401 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84170 /var/tmp/bdevperf.sock 00:15:26.401 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84170 ']' 00:15:26.401 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:26.401 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:26.401 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
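bdevperf is launched idle (-z) with a private RPC socket and only configured afterwards; waitforlisten simply polls that socket until the app answers. A reduced sketch of the pattern (the real waitforlisten in autotest_common.sh also verifies the pid stays alive; rpc_get_methods is used here only as a cheap liveness probe):

    # Start bdevperf idle (-z) on its own RPC socket; the workload is armed but not started.
    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 &
    bdevperf_pid=$!

    # Block until the UNIX-domain RPC socket is up and answering.
    until scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done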
00:15:26.401 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:26.401 22:46:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:26.401 [2024-12-07 22:46:41.040013] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:26.401 [2024-12-07 22:46:41.040297] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84170 ] 00:15:26.661 [2024-12-07 22:46:41.179888] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.661 [2024-12-07 22:46:41.211788] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.661 [2024-12-07 22:46:41.238582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:26.661 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:26.661 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:26.661 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.oJIX4EO56n 00:15:26.920 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:27.180 [2024-12-07 22:46:41.765517] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:27.180 nvme0n1 00:15:27.180 22:46:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:27.439 Running I/O for 1 seconds... 
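The run itself is triggered out-of-band: bdevperf.py sends the perform_tests RPC to the same socket and blocks until the -t 1 run finishes, after which bdevperf prints the human-readable table and the JSON "results" block seen below. Sketch, with an illustrative jq step (capturing the JSON to results.json is an assumption, not something the harness does):

    # Kick off the pre-armed workload and wait for completion.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

    # The printed JSON block can be post-processed, e.g. to extract IOPS:
    jq '.results[0].iops' results.json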
00:15:28.378 4576.00 IOPS, 17.88 MiB/s 00:15:28.378 Latency(us) 00:15:28.378 [2024-12-07T22:46:43.144Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.378 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:28.378 Verification LBA range: start 0x0 length 0x2000 00:15:28.378 nvme0n1 : 1.03 4579.36 17.89 0.00 0.00 27590.68 7089.80 18945.86 00:15:28.378 [2024-12-07T22:46:43.144Z] =================================================================================================================== 00:15:28.378 [2024-12-07T22:46:43.144Z] Total : 4579.36 17.89 0.00 0.00 27590.68 7089.80 18945.86 00:15:28.378 { 00:15:28.378 "results": [ 00:15:28.378 { 00:15:28.378 "job": "nvme0n1", 00:15:28.378 "core_mask": "0x2", 00:15:28.378 "workload": "verify", 00:15:28.378 "status": "finished", 00:15:28.378 "verify_range": { 00:15:28.378 "start": 0, 00:15:28.378 "length": 8192 00:15:28.378 }, 00:15:28.378 "queue_depth": 128, 00:15:28.378 "io_size": 4096, 00:15:28.378 "runtime": 1.027217, 00:15:28.378 "iops": 4579.363464584406, 00:15:28.378 "mibps": 17.888138533532835, 00:15:28.378 "io_failed": 0, 00:15:28.378 "io_timeout": 0, 00:15:28.378 "avg_latency_us": 27590.6825974026, 00:15:28.378 "min_latency_us": 7089.8036363636365, 00:15:28.378 "max_latency_us": 18945.861818181816 00:15:28.378 } 00:15:28.378 ], 00:15:28.378 "core_count": 1 00:15:28.378 } 00:15:28.378 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:28.378 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.378 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.378 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.637 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:28.637 "subsystems": [ 00:15:28.637 { 00:15:28.637 "subsystem": "keyring", 00:15:28.637 "config": [ 00:15:28.637 { 00:15:28.637 "method": "keyring_file_add_key", 00:15:28.637 "params": { 00:15:28.637 "name": "key0", 00:15:28.638 "path": "/tmp/tmp.oJIX4EO56n" 00:15:28.638 } 00:15:28.638 } 00:15:28.638 ] 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "subsystem": "iobuf", 00:15:28.638 "config": [ 00:15:28.638 { 00:15:28.638 "method": "iobuf_set_options", 00:15:28.638 "params": { 00:15:28.638 "small_pool_count": 8192, 00:15:28.638 "large_pool_count": 1024, 00:15:28.638 "small_bufsize": 8192, 00:15:28.638 "large_bufsize": 135168 00:15:28.638 } 00:15:28.638 } 00:15:28.638 ] 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "subsystem": "sock", 00:15:28.638 "config": [ 00:15:28.638 { 00:15:28.638 "method": "sock_set_default_impl", 00:15:28.638 "params": { 00:15:28.638 "impl_name": "uring" 00:15:28.638 } 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "method": "sock_impl_set_options", 00:15:28.638 "params": { 00:15:28.638 "impl_name": "ssl", 00:15:28.638 "recv_buf_size": 4096, 00:15:28.638 "send_buf_size": 4096, 00:15:28.638 "enable_recv_pipe": true, 00:15:28.638 "enable_quickack": false, 00:15:28.638 "enable_placement_id": 0, 00:15:28.638 "enable_zerocopy_send_server": true, 00:15:28.638 "enable_zerocopy_send_client": false, 00:15:28.638 "zerocopy_threshold": 0, 00:15:28.638 "tls_version": 0, 00:15:28.638 "enable_ktls": false 00:15:28.638 } 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "method": "sock_impl_set_options", 00:15:28.638 "params": { 00:15:28.638 "impl_name": "posix", 00:15:28.638 "recv_buf_size": 
2097152, 00:15:28.638 "send_buf_size": 2097152, 00:15:28.638 "enable_recv_pipe": true, 00:15:28.638 "enable_quickack": false, 00:15:28.638 "enable_placement_id": 0, 00:15:28.638 "enable_zerocopy_send_server": true, 00:15:28.638 "enable_zerocopy_send_client": false, 00:15:28.638 "zerocopy_threshold": 0, 00:15:28.638 "tls_version": 0, 00:15:28.638 "enable_ktls": false 00:15:28.638 } 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "method": "sock_impl_set_options", 00:15:28.638 "params": { 00:15:28.638 "impl_name": "uring", 00:15:28.638 "recv_buf_size": 2097152, 00:15:28.638 "send_buf_size": 2097152, 00:15:28.638 "enable_recv_pipe": true, 00:15:28.638 "enable_quickack": false, 00:15:28.638 "enable_placement_id": 0, 00:15:28.638 "enable_zerocopy_send_server": false, 00:15:28.638 "enable_zerocopy_send_client": false, 00:15:28.638 "zerocopy_threshold": 0, 00:15:28.638 "tls_version": 0, 00:15:28.638 "enable_ktls": false 00:15:28.638 } 00:15:28.638 } 00:15:28.638 ] 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "subsystem": "vmd", 00:15:28.638 "config": [] 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "subsystem": "accel", 00:15:28.638 "config": [ 00:15:28.638 { 00:15:28.638 "method": "accel_set_options", 00:15:28.638 "params": { 00:15:28.638 "small_cache_size": 128, 00:15:28.638 "large_cache_size": 16, 00:15:28.638 "task_count": 2048, 00:15:28.638 "sequence_count": 2048, 00:15:28.638 "buf_count": 2048 00:15:28.638 } 00:15:28.638 } 00:15:28.638 ] 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "subsystem": "bdev", 00:15:28.638 "config": [ 00:15:28.638 { 00:15:28.638 "method": "bdev_set_options", 00:15:28.638 "params": { 00:15:28.638 "bdev_io_pool_size": 65535, 00:15:28.638 "bdev_io_cache_size": 256, 00:15:28.638 "bdev_auto_examine": true, 00:15:28.638 "iobuf_small_cache_size": 128, 00:15:28.638 "iobuf_large_cache_size": 16 00:15:28.638 } 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "method": "bdev_raid_set_options", 00:15:28.638 "params": { 00:15:28.638 "process_window_size_kb": 1024, 00:15:28.638 "process_max_bandwidth_mb_sec": 0 00:15:28.638 } 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "method": "bdev_iscsi_set_options", 00:15:28.638 "params": { 00:15:28.638 "timeout_sec": 30 00:15:28.638 } 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "method": "bdev_nvme_set_options", 00:15:28.638 "params": { 00:15:28.638 "action_on_timeout": "none", 00:15:28.638 "timeout_us": 0, 00:15:28.638 "timeout_admin_us": 0, 00:15:28.638 "keep_alive_timeout_ms": 10000, 00:15:28.638 "arbitration_burst": 0, 00:15:28.638 "low_priority_weight": 0, 00:15:28.638 "medium_priority_weight": 0, 00:15:28.638 "high_priority_weight": 0, 00:15:28.638 "nvme_adminq_poll_period_us": 10000, 00:15:28.638 "nvme_ioq_poll_period_us": 0, 00:15:28.638 "io_queue_requests": 0, 00:15:28.638 "delay_cmd_submit": true, 00:15:28.638 "transport_retry_count": 4, 00:15:28.638 "bdev_retry_count": 3, 00:15:28.638 "transport_ack_timeout": 0, 00:15:28.638 "ctrlr_loss_timeout_sec": 0, 00:15:28.638 "reconnect_delay_sec": 0, 00:15:28.638 "fast_io_fail_timeout_sec": 0, 00:15:28.638 "disable_auto_failback": false, 00:15:28.638 "generate_uuids": false, 00:15:28.638 "transport_tos": 0, 00:15:28.638 "nvme_error_stat": false, 00:15:28.638 "rdma_srq_size": 0, 00:15:28.638 "io_path_stat": false, 00:15:28.638 "allow_accel_sequence": false, 00:15:28.638 "rdma_max_cq_size": 0, 00:15:28.638 "rdma_cm_event_timeout_ms": 0, 00:15:28.638 "dhchap_digests": [ 00:15:28.638 "sha256", 00:15:28.638 "sha384", 00:15:28.638 "sha512" 00:15:28.638 ], 00:15:28.638 "dhchap_dhgroups": [ 00:15:28.638 
"null", 00:15:28.638 "ffdhe2048", 00:15:28.638 "ffdhe3072", 00:15:28.638 "ffdhe4096", 00:15:28.638 "ffdhe6144", 00:15:28.638 "ffdhe8192" 00:15:28.638 ] 00:15:28.638 } 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "method": "bdev_nvme_set_hotplug", 00:15:28.638 "params": { 00:15:28.638 "period_us": 100000, 00:15:28.638 "enable": false 00:15:28.638 } 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "method": "bdev_malloc_create", 00:15:28.638 "params": { 00:15:28.638 "name": "malloc0", 00:15:28.638 "num_blocks": 8192, 00:15:28.638 "block_size": 4096, 00:15:28.638 "physical_block_size": 4096, 00:15:28.638 "uuid": "6a40ff03-ead3-47d0-9c97-3558462ca19b", 00:15:28.638 "optimal_io_boundary": 0, 00:15:28.638 "md_size": 0, 00:15:28.638 "dif_type": 0, 00:15:28.638 "dif_is_head_of_md": false, 00:15:28.638 "dif_pi_format": 0 00:15:28.638 } 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "method": "bdev_wait_for_examine" 00:15:28.638 } 00:15:28.638 ] 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "subsystem": "nbd", 00:15:28.638 "config": [] 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "subsystem": "scheduler", 00:15:28.638 "config": [ 00:15:28.638 { 00:15:28.638 "method": "framework_set_scheduler", 00:15:28.638 "params": { 00:15:28.638 "name": "static" 00:15:28.638 } 00:15:28.638 } 00:15:28.638 ] 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "subsystem": "nvmf", 00:15:28.638 "config": [ 00:15:28.638 { 00:15:28.638 "method": "nvmf_set_config", 00:15:28.638 "params": { 00:15:28.638 "discovery_filter": "match_any", 00:15:28.638 "admin_cmd_passthru": { 00:15:28.638 "identify_ctrlr": false 00:15:28.638 }, 00:15:28.638 "dhchap_digests": [ 00:15:28.638 "sha256", 00:15:28.638 "sha384", 00:15:28.638 "sha512" 00:15:28.638 ], 00:15:28.638 "dhchap_dhgroups": [ 00:15:28.638 "null", 00:15:28.638 "ffdhe2048", 00:15:28.638 "ffdhe3072", 00:15:28.638 "ffdhe4096", 00:15:28.638 "ffdhe6144", 00:15:28.638 "ffdhe8192" 00:15:28.638 ] 00:15:28.638 } 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "method": "nvmf_set_max_subsystems", 00:15:28.638 "params": { 00:15:28.638 "max_subsystems": 1024 00:15:28.638 } 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "method": "nvmf_set_crdt", 00:15:28.638 "params": { 00:15:28.638 "crdt1": 0, 00:15:28.638 "crdt2": 0, 00:15:28.638 "crdt3": 0 00:15:28.638 } 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "method": "nvmf_create_transport", 00:15:28.638 "params": { 00:15:28.638 "trtype": "TCP", 00:15:28.638 "max_queue_depth": 128, 00:15:28.638 "max_io_qpairs_per_ctrlr": 127, 00:15:28.638 "in_capsule_data_size": 4096, 00:15:28.638 "max_io_size": 131072, 00:15:28.638 "io_unit_size": 131072, 00:15:28.638 "max_aq_depth": 128, 00:15:28.638 "num_shared_buffers": 511, 00:15:28.638 "buf_cache_size": 4294967295, 00:15:28.638 "dif_insert_or_strip": false, 00:15:28.638 "zcopy": false, 00:15:28.638 "c2h_success": false, 00:15:28.638 "sock_priority": 0, 00:15:28.638 "abort_timeout_sec": 1, 00:15:28.638 "ack_timeout": 0, 00:15:28.638 "data_wr_pool_size": 0 00:15:28.638 } 00:15:28.638 }, 00:15:28.638 { 00:15:28.638 "method": "nvmf_create_subsystem", 00:15:28.638 "params": { 00:15:28.638 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.638 "allow_any_host": false, 00:15:28.638 "serial_number": "00000000000000000000", 00:15:28.638 "model_number": "SPDK bdev Controller", 00:15:28.638 "max_namespaces": 32, 00:15:28.638 "min_cntlid": 1, 00:15:28.639 "max_cntlid": 65519, 00:15:28.639 "ana_reporting": false 00:15:28.639 } 00:15:28.639 }, 00:15:28.639 { 00:15:28.639 "method": "nvmf_subsystem_add_host", 00:15:28.639 "params": { 00:15:28.639 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:15:28.639 "host": "nqn.2016-06.io.spdk:host1", 00:15:28.639 "psk": "key0" 00:15:28.639 } 00:15:28.639 }, 00:15:28.639 { 00:15:28.639 "method": "nvmf_subsystem_add_ns", 00:15:28.639 "params": { 00:15:28.639 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.639 "namespace": { 00:15:28.639 "nsid": 1, 00:15:28.639 "bdev_name": "malloc0", 00:15:28.639 "nguid": "6A40FF03EAD347D09C973558462CA19B", 00:15:28.639 "uuid": "6a40ff03-ead3-47d0-9c97-3558462ca19b", 00:15:28.639 "no_auto_visible": false 00:15:28.639 } 00:15:28.639 } 00:15:28.639 }, 00:15:28.639 { 00:15:28.639 "method": "nvmf_subsystem_add_listener", 00:15:28.639 "params": { 00:15:28.639 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.639 "listen_address": { 00:15:28.639 "trtype": "TCP", 00:15:28.639 "adrfam": "IPv4", 00:15:28.639 "traddr": "10.0.0.3", 00:15:28.639 "trsvcid": "4420" 00:15:28.639 }, 00:15:28.639 "secure_channel": false, 00:15:28.639 "sock_impl": "ssl" 00:15:28.639 } 00:15:28.639 } 00:15:28.639 ] 00:15:28.639 } 00:15:28.639 ] 00:15:28.639 }' 00:15:28.639 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:28.899 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:28.899 "subsystems": [ 00:15:28.899 { 00:15:28.899 "subsystem": "keyring", 00:15:28.899 "config": [ 00:15:28.899 { 00:15:28.899 "method": "keyring_file_add_key", 00:15:28.899 "params": { 00:15:28.899 "name": "key0", 00:15:28.899 "path": "/tmp/tmp.oJIX4EO56n" 00:15:28.899 } 00:15:28.899 } 00:15:28.899 ] 00:15:28.899 }, 00:15:28.899 { 00:15:28.899 "subsystem": "iobuf", 00:15:28.899 "config": [ 00:15:28.899 { 00:15:28.899 "method": "iobuf_set_options", 00:15:28.899 "params": { 00:15:28.899 "small_pool_count": 8192, 00:15:28.899 "large_pool_count": 1024, 00:15:28.899 "small_bufsize": 8192, 00:15:28.899 "large_bufsize": 135168 00:15:28.899 } 00:15:28.899 } 00:15:28.899 ] 00:15:28.899 }, 00:15:28.899 { 00:15:28.899 "subsystem": "sock", 00:15:28.899 "config": [ 00:15:28.899 { 00:15:28.899 "method": "sock_set_default_impl", 00:15:28.899 "params": { 00:15:28.899 "impl_name": "uring" 00:15:28.899 } 00:15:28.899 }, 00:15:28.899 { 00:15:28.899 "method": "sock_impl_set_options", 00:15:28.899 "params": { 00:15:28.899 "impl_name": "ssl", 00:15:28.899 "recv_buf_size": 4096, 00:15:28.899 "send_buf_size": 4096, 00:15:28.899 "enable_recv_pipe": true, 00:15:28.899 "enable_quickack": false, 00:15:28.899 "enable_placement_id": 0, 00:15:28.899 "enable_zerocopy_send_server": true, 00:15:28.899 "enable_zerocopy_send_client": false, 00:15:28.899 "zerocopy_threshold": 0, 00:15:28.899 "tls_version": 0, 00:15:28.899 "enable_ktls": false 00:15:28.899 } 00:15:28.899 }, 00:15:28.899 { 00:15:28.899 "method": "sock_impl_set_options", 00:15:28.899 "params": { 00:15:28.899 "impl_name": "posix", 00:15:28.899 "recv_buf_size": 2097152, 00:15:28.899 "send_buf_size": 2097152, 00:15:28.899 "enable_recv_pipe": true, 00:15:28.899 "enable_quickack": false, 00:15:28.899 "enable_placement_id": 0, 00:15:28.900 "enable_zerocopy_send_server": true, 00:15:28.900 "enable_zerocopy_send_client": false, 00:15:28.900 "zerocopy_threshold": 0, 00:15:28.900 "tls_version": 0, 00:15:28.900 "enable_ktls": false 00:15:28.900 } 00:15:28.900 }, 00:15:28.900 { 00:15:28.900 "method": "sock_impl_set_options", 00:15:28.900 "params": { 00:15:28.900 "impl_name": "uring", 00:15:28.900 "recv_buf_size": 2097152, 00:15:28.900 "send_buf_size": 2097152, 00:15:28.900 
"enable_recv_pipe": true, 00:15:28.900 "enable_quickack": false, 00:15:28.900 "enable_placement_id": 0, 00:15:28.900 "enable_zerocopy_send_server": false, 00:15:28.900 "enable_zerocopy_send_client": false, 00:15:28.900 "zerocopy_threshold": 0, 00:15:28.900 "tls_version": 0, 00:15:28.900 "enable_ktls": false 00:15:28.900 } 00:15:28.900 } 00:15:28.900 ] 00:15:28.900 }, 00:15:28.900 { 00:15:28.900 "subsystem": "vmd", 00:15:28.900 "config": [] 00:15:28.900 }, 00:15:28.900 { 00:15:28.900 "subsystem": "accel", 00:15:28.900 "config": [ 00:15:28.900 { 00:15:28.900 "method": "accel_set_options", 00:15:28.900 "params": { 00:15:28.900 "small_cache_size": 128, 00:15:28.900 "large_cache_size": 16, 00:15:28.900 "task_count": 2048, 00:15:28.900 "sequence_count": 2048, 00:15:28.900 "buf_count": 2048 00:15:28.900 } 00:15:28.900 } 00:15:28.900 ] 00:15:28.900 }, 00:15:28.900 { 00:15:28.900 "subsystem": "bdev", 00:15:28.900 "config": [ 00:15:28.900 { 00:15:28.900 "method": "bdev_set_options", 00:15:28.900 "params": { 00:15:28.900 "bdev_io_pool_size": 65535, 00:15:28.900 "bdev_io_cache_size": 256, 00:15:28.900 "bdev_auto_examine": true, 00:15:28.900 "iobuf_small_cache_size": 128, 00:15:28.900 "iobuf_large_cache_size": 16 00:15:28.900 } 00:15:28.900 }, 00:15:28.900 { 00:15:28.900 "method": "bdev_raid_set_options", 00:15:28.900 "params": { 00:15:28.900 "process_window_size_kb": 1024, 00:15:28.900 "process_max_bandwidth_mb_sec": 0 00:15:28.900 } 00:15:28.900 }, 00:15:28.900 { 00:15:28.900 "method": "bdev_iscsi_set_options", 00:15:28.900 "params": { 00:15:28.900 "timeout_sec": 30 00:15:28.900 } 00:15:28.900 }, 00:15:28.900 { 00:15:28.900 "method": "bdev_nvme_set_options", 00:15:28.900 "params": { 00:15:28.900 "action_on_timeout": "none", 00:15:28.900 "timeout_us": 0, 00:15:28.900 "timeout_admin_us": 0, 00:15:28.900 "keep_alive_timeout_ms": 10000, 00:15:28.900 "arbitration_burst": 0, 00:15:28.900 "low_priority_weight": 0, 00:15:28.900 "medium_priority_weight": 0, 00:15:28.900 "high_priority_weight": 0, 00:15:28.900 "nvme_adminq_poll_period_us": 10000, 00:15:28.900 "nvme_ioq_poll_period_us": 0, 00:15:28.900 "io_queue_requests": 512, 00:15:28.900 "delay_cmd_submit": true, 00:15:28.900 "transport_retry_count": 4, 00:15:28.900 "bdev_retry_count": 3, 00:15:28.900 "transport_ack_timeout": 0, 00:15:28.900 "ctrlr_loss_timeout_sec": 0, 00:15:28.900 "reconnect_delay_sec": 0, 00:15:28.900 "fast_io_fail_timeout_sec": 0, 00:15:28.900 "disable_auto_failback": false, 00:15:28.900 "generate_uuids": false, 00:15:28.900 "transport_tos": 0, 00:15:28.900 "nvme_error_stat": false, 00:15:28.900 "rdma_srq_size": 0, 00:15:28.900 "io_path_stat": false, 00:15:28.900 "allow_accel_sequence": false, 00:15:28.900 "rdma_max_cq_size": 0, 00:15:28.900 "rdma_cm_event_timeout_ms": 0, 00:15:28.900 "dhchap_digests": [ 00:15:28.900 "sha256", 00:15:28.900 "sha384", 00:15:28.900 "sha512" 00:15:28.900 ], 00:15:28.900 "dhchap_dhgroups": [ 00:15:28.900 "null", 00:15:28.900 "ffdhe2048", 00:15:28.900 "ffdhe3072", 00:15:28.900 "ffdhe4096", 00:15:28.900 "ffdhe6144", 00:15:28.900 "ffdhe8192" 00:15:28.900 ] 00:15:28.900 } 00:15:28.900 }, 00:15:28.900 { 00:15:28.900 "method": "bdev_nvme_attach_controller", 00:15:28.900 "params": { 00:15:28.900 "name": "nvme0", 00:15:28.900 "trtype": "TCP", 00:15:28.900 "adrfam": "IPv4", 00:15:28.900 "traddr": "10.0.0.3", 00:15:28.900 "trsvcid": "4420", 00:15:28.900 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:28.900 "prchk_reftag": false, 00:15:28.900 "prchk_guard": false, 00:15:28.900 "ctrlr_loss_timeout_sec": 0, 00:15:28.900 
"reconnect_delay_sec": 0, 00:15:28.900 "fast_io_fail_timeout_sec": 0, 00:15:28.900 "psk": "key0", 00:15:28.900 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:28.900 "hdgst": false, 00:15:28.900 "ddgst": false 00:15:28.900 } 00:15:28.900 }, 00:15:28.900 { 00:15:28.900 "method": "bdev_nvme_set_hotplug", 00:15:28.900 "params": { 00:15:28.900 "period_us": 100000, 00:15:28.900 "enable": false 00:15:28.900 } 00:15:28.900 }, 00:15:28.900 { 00:15:28.900 "method": "bdev_enable_histogram", 00:15:28.900 "params": { 00:15:28.900 "name": "nvme0n1", 00:15:28.900 "enable": true 00:15:28.900 } 00:15:28.900 }, 00:15:28.900 { 00:15:28.900 "method": "bdev_wait_for_examine" 00:15:28.900 } 00:15:28.900 ] 00:15:28.900 }, 00:15:28.900 { 00:15:28.900 "subsystem": "nbd", 00:15:28.900 "config": [] 00:15:28.900 } 00:15:28.900 ] 00:15:28.900 }' 00:15:28.900 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84170 00:15:28.900 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84170 ']' 00:15:28.900 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84170 00:15:28.900 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:28.900 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.900 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84170 00:15:28.900 killing process with pid 84170 00:15:28.900 Received shutdown signal, test time was about 1.000000 seconds 00:15:28.900 00:15:28.900 Latency(us) 00:15:28.900 [2024-12-07T22:46:43.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:28.900 [2024-12-07T22:46:43.666Z] =================================================================================================================== 00:15:28.900 [2024-12-07T22:46:43.666Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:28.900 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:28.900 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:28.900 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84170' 00:15:28.900 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84170 00:15:28.900 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84170 00:15:28.900 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84137 00:15:28.900 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84137 ']' 00:15:28.900 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84137 00:15:28.900 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:28.900 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:28.900 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84137 00:15:29.161 killing process with pid 84137 00:15:29.161 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:29.161 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 
00:15:29.161 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84137' 00:15:29.161 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84137 00:15:29.161 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84137 00:15:29.161 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:29.161 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:29.161 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:29.161 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:29.161 "subsystems": [ 00:15:29.161 { 00:15:29.161 "subsystem": "keyring", 00:15:29.161 "config": [ 00:15:29.161 { 00:15:29.161 "method": "keyring_file_add_key", 00:15:29.161 "params": { 00:15:29.161 "name": "key0", 00:15:29.161 "path": "/tmp/tmp.oJIX4EO56n" 00:15:29.161 } 00:15:29.161 } 00:15:29.161 ] 00:15:29.161 }, 00:15:29.161 { 00:15:29.161 "subsystem": "iobuf", 00:15:29.161 "config": [ 00:15:29.161 { 00:15:29.161 "method": "iobuf_set_options", 00:15:29.161 "params": { 00:15:29.161 "small_pool_count": 8192, 00:15:29.161 "large_pool_count": 1024, 00:15:29.161 "small_bufsize": 8192, 00:15:29.161 "large_bufsize": 135168 00:15:29.161 } 00:15:29.161 } 00:15:29.161 ] 00:15:29.161 }, 00:15:29.161 { 00:15:29.161 "subsystem": "sock", 00:15:29.161 "config": [ 00:15:29.161 { 00:15:29.161 "method": "sock_set_default_impl", 00:15:29.161 "params": { 00:15:29.161 "impl_name": "uring" 00:15:29.161 } 00:15:29.161 }, 00:15:29.161 { 00:15:29.161 "method": "sock_impl_set_options", 00:15:29.161 "params": { 00:15:29.161 "impl_name": "ssl", 00:15:29.161 "recv_buf_size": 4096, 00:15:29.161 "send_buf_size": 4096, 00:15:29.161 "enable_recv_pipe": true, 00:15:29.161 "enable_quickack": false, 00:15:29.161 "enable_placement_id": 0, 00:15:29.161 "enable_zerocopy_send_server": true, 00:15:29.161 "enable_zerocopy_send_client": false, 00:15:29.161 "zerocopy_threshold": 0, 00:15:29.161 "tls_version": 0, 00:15:29.161 "enable_ktls": false 00:15:29.161 } 00:15:29.161 }, 00:15:29.161 { 00:15:29.161 "method": "sock_impl_set_options", 00:15:29.161 "params": { 00:15:29.161 "impl_name": "posix", 00:15:29.161 "recv_buf_size": 2097152, 00:15:29.161 "send_buf_size": 2097152, 00:15:29.161 "enable_recv_pipe": true, 00:15:29.161 "enable_quickack": false, 00:15:29.161 "enable_placement_id": 0, 00:15:29.161 "enable_zerocopy_send_server": true, 00:15:29.161 "enable_zerocopy_send_client": false, 00:15:29.161 "zerocopy_threshold": 0, 00:15:29.161 "tls_version": 0, 00:15:29.161 "enable_ktls": false 00:15:29.161 } 00:15:29.161 }, 00:15:29.161 { 00:15:29.161 "method": "sock_impl_set_options", 00:15:29.161 "params": { 00:15:29.161 "impl_name": "uring", 00:15:29.161 "recv_buf_size": 2097152, 00:15:29.161 "send_buf_size": 2097152, 00:15:29.161 "enable_recv_pipe": true, 00:15:29.161 "enable_quickack": false, 00:15:29.161 "enable_placement_id": 0, 00:15:29.161 "enable_zerocopy_send_server": false, 00:15:29.161 "enable_zerocopy_send_client": false, 00:15:29.161 "zerocopy_threshold": 0, 00:15:29.161 "tls_version": 0, 00:15:29.161 "enable_ktls": false 00:15:29.161 } 00:15:29.161 } 00:15:29.161 ] 00:15:29.161 }, 00:15:29.161 { 00:15:29.161 "subsystem": "vmd", 00:15:29.161 "config": [] 00:15:29.161 }, 00:15:29.161 { 00:15:29.161 "subsystem": "accel", 00:15:29.161 "config": [ 
00:15:29.161 { 00:15:29.161 "method": "accel_set_options", 00:15:29.161 "params": { 00:15:29.161 "small_cache_size": 128, 00:15:29.161 "large_cache_size": 16, 00:15:29.161 "task_count": 2048, 00:15:29.161 "sequence_count": 2048, 00:15:29.161 "buf_count": 2048 00:15:29.161 } 00:15:29.161 } 00:15:29.161 ] 00:15:29.161 }, 00:15:29.161 { 00:15:29.161 "subsystem": "bdev", 00:15:29.161 "config": [ 00:15:29.161 { 00:15:29.161 "method": "bdev_set_options", 00:15:29.161 "params": { 00:15:29.161 "bdev_io_pool_size": 65535, 00:15:29.161 "bdev_io_cache_size": 256, 00:15:29.161 "bdev_auto_examine": true, 00:15:29.161 "iobuf_small_cache_size": 128, 00:15:29.161 "iobuf_large_cache_size": 16 00:15:29.161 } 00:15:29.161 }, 00:15:29.161 { 00:15:29.161 "method": "bdev_raid_set_options", 00:15:29.161 "params": { 00:15:29.161 "process_window_size_kb": 1024, 00:15:29.161 "process_max_bandwidth_mb_sec": 0 00:15:29.161 } 00:15:29.161 }, 00:15:29.161 { 00:15:29.161 "method": "bdev_iscsi_set_options", 00:15:29.161 "params": { 00:15:29.161 "timeout_sec": 30 00:15:29.161 } 00:15:29.161 }, 00:15:29.161 { 00:15:29.161 "method": "bdev_nvme_set_options", 00:15:29.161 "params": { 00:15:29.161 "action_on_timeout": "none", 00:15:29.161 "timeout_us": 0, 00:15:29.161 "timeout_admin_us": 0, 00:15:29.161 "keep_alive_timeout_ms": 10000, 00:15:29.161 "arbitration_burst": 0, 00:15:29.161 "low_priority_weight": 0, 00:15:29.161 "medium_priority_weight": 0, 00:15:29.161 "high_priority_weight": 0, 00:15:29.161 "nvme_adminq_poll_period_us": 10000, 00:15:29.161 "nvme_ioq_poll_period_us": 0, 00:15:29.161 "io_queue_requests": 0, 00:15:29.161 "delay_cmd_submit": true, 00:15:29.161 "transport_retry_count": 4, 00:15:29.161 "bdev_retry_count": 3, 00:15:29.161 "transport_ack_timeout": 0, 00:15:29.161 "ctrlr_loss_timeout_sec": 0, 00:15:29.161 "reconnect_delay_sec": 0, 00:15:29.161 "fast_io_fail_timeout_sec": 0, 00:15:29.161 "disable_auto_failback": false, 00:15:29.161 "generate_uuids": false, 00:15:29.161 "transport_tos": 0, 00:15:29.161 "nvme_error_stat": false, 00:15:29.161 "rdma_srq_size": 0, 00:15:29.161 "io_path_stat": false, 00:15:29.161 "allow_accel_sequence": false, 00:15:29.161 "rdma_max_cq_size": 0, 00:15:29.161 "rdma_cm_event_timeout_ms": 0, 00:15:29.161 "dhchap_digests": [ 00:15:29.161 "sha256", 00:15:29.161 "sha384", 00:15:29.162 "sha512" 00:15:29.162 ], 00:15:29.162 "dhchap_dhgroups": [ 00:15:29.162 "null", 00:15:29.162 "ffdhe2048", 00:15:29.162 "ffdhe3072", 00:15:29.162 "ffdhe4096", 00:15:29.162 "ffdhe6144", 00:15:29.162 "ffdhe8192" 00:15:29.162 ] 00:15:29.162 } 00:15:29.162 }, 00:15:29.162 { 00:15:29.162 "method": "bdev_nvme_set_hotplug", 00:15:29.162 "params": { 00:15:29.162 "period_us": 100000, 00:15:29.162 "enable": false 00:15:29.162 } 00:15:29.162 }, 00:15:29.162 { 00:15:29.162 "method": "bdev_malloc_create", 00:15:29.162 "params": { 00:15:29.162 "name": "malloc0", 00:15:29.162 "num_blocks": 8192, 00:15:29.162 "block_size": 4096, 00:15:29.162 "physical_block_size": 4096, 00:15:29.162 "uuid": "6a40ff03-ead3-47d0-9c97-3558462ca19b", 00:15:29.162 "optimal_io_boundary": 0, 00:15:29.162 "md_size": 0, 00:15:29.162 "dif_type": 0, 00:15:29.162 "dif_is_head_of_md": false, 00:15:29.162 "dif_pi_format": 0 00:15:29.162 } 00:15:29.162 }, 00:15:29.162 { 00:15:29.162 "method": "bdev_wait_for_examine" 00:15:29.162 } 00:15:29.162 ] 00:15:29.162 }, 00:15:29.162 { 00:15:29.162 "subsystem": "nbd", 00:15:29.162 "config": [] 00:15:29.162 }, 00:15:29.162 { 00:15:29.162 "subsystem": "scheduler", 00:15:29.162 "config": [ 00:15:29.162 { 00:15:29.162 
"method": "framework_set_scheduler", 00:15:29.162 "params": { 00:15:29.162 "name": "static" 00:15:29.162 } 00:15:29.162 } 00:15:29.162 ] 00:15:29.162 }, 00:15:29.162 { 00:15:29.162 "subsystem": "nvmf", 00:15:29.162 "config": [ 00:15:29.162 { 00:15:29.162 "method": "nvmf_set_config", 00:15:29.162 "params": { 00:15:29.162 "discovery_filter": "match_any", 00:15:29.162 "admin_cmd_passthru": { 00:15:29.162 "identify_ctrlr": false 00:15:29.162 }, 00:15:29.162 "dhchap_digests": [ 00:15:29.162 "sha256", 00:15:29.162 "sha384", 00:15:29.162 "sha512" 00:15:29.162 ], 00:15:29.162 "dhchap_dhgroups": [ 00:15:29.162 "null", 00:15:29.162 "ffdhe2048", 00:15:29.162 "ffdhe3072", 00:15:29.162 "ffdhe4096", 00:15:29.162 "ffdhe6144", 00:15:29.162 "ffdhe8192" 00:15:29.162 ] 00:15:29.162 } 00:15:29.162 }, 00:15:29.162 { 00:15:29.162 "method": "nvmf_set_max_subsystems", 00:15:29.162 "params": { 00:15:29.162 "max_subsystems": 1024 00:15:29.162 } 00:15:29.162 }, 00:15:29.162 { 00:15:29.162 "method": "nvmf_set_crdt", 00:15:29.162 "params": { 00:15:29.162 "crdt1": 0, 00:15:29.162 "crdt2": 0, 00:15:29.162 "crdt3": 0 00:15:29.162 } 00:15:29.162 }, 00:15:29.162 { 00:15:29.162 "method": "nvmf_create_transport", 00:15:29.162 "params": { 00:15:29.162 "trtype": "TCP", 00:15:29.162 "max_queue_depth": 128, 00:15:29.162 "max_io_qpairs_per_ctrlr": 127, 00:15:29.162 "in_capsule_data_size": 4096, 00:15:29.162 "max_io_size": 131072, 00:15:29.162 "io_unit_size": 131072, 00:15:29.162 "max_aq_depth": 128, 00:15:29.162 "num_shared_buffers": 511, 00:15:29.162 "buf_cache_size": 4294967295, 00:15:29.162 "dif_insert_or_strip": false, 00:15:29.162 "zcopy": false, 00:15:29.162 "c2h_success": false, 00:15:29.162 "sock_priority": 0, 00:15:29.162 "abort_timeout_sec": 1, 00:15:29.162 "ack_timeout": 0, 00:15:29.162 "data_wr_pool_size": 0 00:15:29.162 } 00:15:29.162 }, 00:15:29.162 { 00:15:29.162 "method": "nvmf_create_subsystem", 00:15:29.162 "params": { 00:15:29.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.162 "allow_any_host": false, 00:15:29.162 "serial_number": "00000000000000000000", 00:15:29.162 "model_number": "SPDK bdev Controller", 00:15:29.162 "max_namespaces": 32, 00:15:29.162 "min_cntlid": 1, 00:15:29.162 "max_cntlid": 65519, 00:15:29.162 "ana_reporting": false 00:15:29.162 } 00:15:29.162 }, 00:15:29.162 { 00:15:29.162 "method": "nvmf_subsystem_add_host", 00:15:29.162 "params": { 00:15:29.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.162 "host": "nqn.2016-06.io.spdk:host1", 00:15:29.162 "psk": "key0" 00:15:29.162 } 00:15:29.162 }, 00:15:29.162 { 00:15:29.162 "method": "nvmf_subsystem_add_ns", 00:15:29.162 "params": { 00:15:29.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.162 "namespace": { 00:15:29.162 "nsid": 1, 00:15:29.162 "bdev_name": "malloc0", 00:15:29.162 "nguid": "6A40FF03EAD347D09C973558462CA19B", 00:15:29.162 "uuid": "6a40ff03-ead3-47d0-9c97-3558462ca19b", 00:15:29.162 "no_auto_visible": false 00:15:29.162 } 00:15:29.162 } 00:15:29.162 }, 00:15:29.162 { 00:15:29.162 "method": "nvmf_subsystem_add_listener", 00:15:29.162 "params": { 00:15:29.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:29.162 "listen_address": { 00:15:29.162 "trtype": "TCP", 00:15:29.162 "adrfam": "IPv4", 00:15:29.162 "traddr": "10.0.0.3", 00:15:29.162 "trsvcid": "4420" 00:15:29.162 }, 00:15:29.162 "secure_channel": false, 00:15:29.162 "sock_impl": "ssl" 00:15:29.162 } 00:15:29.162 } 00:15:29.162 ] 00:15:29.162 } 00:15:29.162 ] 00:15:29.162 }' 00:15:29.162 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 
00:15:29.162 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84218 00:15:29.162 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:29.162 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84218 00:15:29.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:29.162 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84218 ']' 00:15:29.162 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:29.162 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.162 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:29.162 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.162 22:46:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.162 [2024-12-07 22:46:43.866269] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:29.162 [2024-12-07 22:46:43.866526] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.422 [2024-12-07 22:46:43.998487] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.422 [2024-12-07 22:46:44.029342] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.422 [2024-12-07 22:46:44.029392] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.422 [2024-12-07 22:46:44.029418] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.422 [2024-12-07 22:46:44.029425] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.422 [2024-12-07 22:46:44.029431] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
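The tracepoint notices above are also what makes the cleanup step near the end of this log possible, where /dev/shm/nvmf_trace.0 is archived. Capturing the trace for offline analysis follows the notices verbatim (the spdk_trace path assumes a default build layout):

    # Inspect live, or keep the raw shm file for later decoding:
    build/bin/spdk_trace -s nvmf -i 0
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0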
00:15:29.422 [2024-12-07 22:46:44.029496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.422 [2024-12-07 22:46:44.170769] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:29.681 [2024-12-07 22:46:44.225541] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.681 [2024-12-07 22:46:44.269592] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:29.681 [2024-12-07 22:46:44.269782] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:30.249 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:30.249 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:30.249 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:30.249 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:30.249 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:30.249 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.249 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=84250 00:15:30.249 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 84250 /var/tmp/bdevperf.sock 00:15:30.249 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84250 ']' 00:15:30.249 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:30.249 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:30.249 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:15:30.249 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:30.249 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:30.249 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.249 22:46:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:30.249 "subsystems": [ 00:15:30.249 { 00:15:30.249 "subsystem": "keyring", 00:15:30.249 "config": [ 00:15:30.249 { 00:15:30.249 "method": "keyring_file_add_key", 00:15:30.249 "params": { 00:15:30.249 "name": "key0", 00:15:30.249 "path": "/tmp/tmp.oJIX4EO56n" 00:15:30.249 } 00:15:30.249 } 00:15:30.249 ] 00:15:30.249 }, 00:15:30.249 { 00:15:30.249 "subsystem": "iobuf", 00:15:30.249 "config": [ 00:15:30.249 { 00:15:30.249 "method": "iobuf_set_options", 00:15:30.249 "params": { 00:15:30.249 "small_pool_count": 8192, 00:15:30.249 "large_pool_count": 1024, 00:15:30.249 "small_bufsize": 8192, 00:15:30.249 "large_bufsize": 135168 00:15:30.249 } 00:15:30.249 } 00:15:30.249 ] 00:15:30.249 }, 00:15:30.249 { 00:15:30.249 "subsystem": "sock", 00:15:30.249 "config": [ 00:15:30.249 { 00:15:30.249 "method": "sock_set_default_impl", 00:15:30.249 "params": { 00:15:30.249 "impl_name": "uring" 00:15:30.249 } 00:15:30.249 }, 00:15:30.249 { 00:15:30.249 "method": "sock_impl_set_options", 00:15:30.249 "params": { 00:15:30.249 "impl_name": "ssl", 00:15:30.249 "recv_buf_size": 4096, 00:15:30.249 "send_buf_size": 4096, 00:15:30.249 "enable_recv_pipe": true, 00:15:30.249 "enable_quickack": false, 00:15:30.249 "enable_placement_id": 0, 00:15:30.249 "enable_zerocopy_send_server": true, 00:15:30.249 "enable_zerocopy_send_client": false, 00:15:30.249 "zerocopy_threshold": 0, 00:15:30.249 "tls_version": 0, 00:15:30.249 "enable_ktls": false 00:15:30.249 } 00:15:30.249 }, 00:15:30.249 { 00:15:30.249 "method": "sock_impl_set_options", 00:15:30.249 "params": { 00:15:30.249 "impl_name": "posix", 00:15:30.249 "recv_buf_size": 2097152, 00:15:30.249 "send_buf_size": 2097152, 00:15:30.249 "enable_recv_pipe": true, 00:15:30.249 "enable_quickack": false, 00:15:30.249 "enable_placement_id": 0, 00:15:30.249 "enable_zerocopy_send_server": true, 00:15:30.249 "enable_zerocopy_send_client": false, 00:15:30.249 "zerocopy_threshold": 0, 00:15:30.249 "tls_version": 0, 00:15:30.249 "enable_ktls": false 00:15:30.249 } 00:15:30.249 }, 00:15:30.249 { 00:15:30.249 "method": "sock_impl_set_options", 00:15:30.249 "params": { 00:15:30.250 "impl_name": "uring", 00:15:30.250 "recv_buf_size": 2097152, 00:15:30.250 "send_buf_size": 2097152, 00:15:30.250 "enable_recv_pipe": true, 00:15:30.250 "enable_quickack": false, 00:15:30.250 "enable_placement_id": 0, 00:15:30.250 "enable_zerocopy_send_server": false, 00:15:30.250 "enable_zerocopy_send_client": false, 00:15:30.250 "zerocopy_threshold": 0, 00:15:30.250 "tls_version": 0, 00:15:30.250 "enable_ktls": false 00:15:30.250 } 00:15:30.250 } 00:15:30.250 ] 00:15:30.250 }, 00:15:30.250 { 00:15:30.250 "subsystem": "vmd", 00:15:30.250 "config": [] 00:15:30.250 }, 00:15:30.250 { 00:15:30.250 "subsystem": "accel", 00:15:30.250 "config": [ 00:15:30.250 { 00:15:30.250 "method": "accel_set_options", 00:15:30.250 "params": { 00:15:30.250 "small_cache_size": 128, 00:15:30.250 "large_cache_size": 16, 00:15:30.250 "task_count": 2048, 00:15:30.250 "sequence_count": 2048, 00:15:30.250 "buf_count": 2048 
00:15:30.250 } 00:15:30.250 } 00:15:30.250 ] 00:15:30.250 }, 00:15:30.250 { 00:15:30.250 "subsystem": "bdev", 00:15:30.250 "config": [ 00:15:30.250 { 00:15:30.250 "method": "bdev_set_options", 00:15:30.250 "params": { 00:15:30.250 "bdev_io_pool_size": 65535, 00:15:30.250 "bdev_io_cache_size": 256, 00:15:30.250 "bdev_auto_examine": true, 00:15:30.250 "iobuf_small_cache_size": 128, 00:15:30.250 "iobuf_large_cache_size": 16 00:15:30.250 } 00:15:30.250 }, 00:15:30.250 { 00:15:30.250 "method": "bdev_raid_set_options", 00:15:30.250 "params": { 00:15:30.250 "process_window_size_kb": 1024, 00:15:30.250 "process_max_bandwidth_mb_sec": 0 00:15:30.250 } 00:15:30.250 }, 00:15:30.250 { 00:15:30.250 "method": "bdev_iscsi_set_options", 00:15:30.250 "params": { 00:15:30.250 "timeout_sec": 30 00:15:30.250 } 00:15:30.250 }, 00:15:30.250 { 00:15:30.250 "method": "bdev_nvme_set_options", 00:15:30.250 "params": { 00:15:30.250 "action_on_timeout": "none", 00:15:30.250 "timeout_us": 0, 00:15:30.250 "timeout_admin_us": 0, 00:15:30.250 "keep_alive_timeout_ms": 10000, 00:15:30.250 "arbitration_burst": 0, 00:15:30.250 "low_priority_weight": 0, 00:15:30.250 "medium_priority_weight": 0, 00:15:30.250 "high_priority_weight": 0, 00:15:30.250 "nvme_adminq_poll_period_us": 10000, 00:15:30.250 "nvme_ioq_poll_period_us": 0, 00:15:30.250 "io_queue_requests": 512, 00:15:30.250 "delay_cmd_submit": true, 00:15:30.250 "transport_retry_count": 4, 00:15:30.250 "bdev_retry_count": 3, 00:15:30.250 "transport_ack_timeout": 0, 00:15:30.250 "ctrlr_loss_timeout_sec": 0, 00:15:30.250 "reconnect_delay_sec": 0, 00:15:30.250 "fast_io_fail_timeout_sec": 0, 00:15:30.250 "disable_auto_failback": false, 00:15:30.250 "generate_uuids": false, 00:15:30.250 "transport_tos": 0, 00:15:30.250 "nvme_error_stat": false, 00:15:30.250 "rdma_srq_size": 0, 00:15:30.250 "io_path_stat": false, 00:15:30.250 "allow_accel_sequence": false, 00:15:30.250 "rdma_max_cq_size": 0, 00:15:30.250 "rdma_cm_event_timeout_ms": 0, 00:15:30.250 "dhchap_digests": [ 00:15:30.250 "sha256", 00:15:30.250 "sha384", 00:15:30.250 "sha512" 00:15:30.250 ], 00:15:30.250 "dhchap_dhgroups": [ 00:15:30.250 "null", 00:15:30.250 "ffdhe2048", 00:15:30.250 "ffdhe3072", 00:15:30.250 "ffdhe4096", 00:15:30.250 "ffdhe6144", 00:15:30.250 "ffdhe8192" 00:15:30.250 ] 00:15:30.250 } 00:15:30.250 }, 00:15:30.250 { 00:15:30.250 "method": "bdev_nvme_attach_controller", 00:15:30.250 "params": { 00:15:30.250 "name": "nvme0", 00:15:30.250 "trtype": "TCP", 00:15:30.250 "adrfam": "IPv4", 00:15:30.250 "traddr": "10.0.0.3", 00:15:30.250 "trsvcid": "4420", 00:15:30.250 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.250 "prchk_reftag": false, 00:15:30.250 "prchk_guard": false, 00:15:30.250 "ctrlr_loss_timeout_sec": 0, 00:15:30.250 "reconnect_delay_sec": 0, 00:15:30.250 "fast_io_fail_timeout_sec": 0, 00:15:30.250 "psk": "key0", 00:15:30.250 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:30.250 "hdgst": false, 00:15:30.250 "ddgst": false 00:15:30.250 } 00:15:30.250 }, 00:15:30.250 { 00:15:30.250 "method": "bdev_nvme_set_hotplug", 00:15:30.250 "params": { 00:15:30.250 "period_us": 100000, 00:15:30.250 "enable": false 00:15:30.250 } 00:15:30.250 }, 00:15:30.250 { 00:15:30.250 "method": "bdev_enable_histogram", 00:15:30.250 "params": { 00:15:30.250 "name": "nvme0n1", 00:15:30.250 "enable": true 00:15:30.250 } 00:15:30.250 }, 00:15:30.250 { 00:15:30.250 "method": "bdev_wait_for_examine" 00:15:30.250 } 00:15:30.250 ] 00:15:30.250 }, 00:15:30.250 { 00:15:30.250 "subsystem": "nbd", 00:15:30.250 "config": [] 00:15:30.250 } 
00:15:30.250 ] 00:15:30.250 }' 00:15:30.250 [2024-12-07 22:46:44.965799] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:30.250 [2024-12-07 22:46:44.965920] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84250 ] 00:15:30.515 [2024-12-07 22:46:45.106528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.515 [2024-12-07 22:46:45.147426] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.515 [2024-12-07 22:46:45.260829] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:30.773 [2024-12-07 22:46:45.292361] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:31.338 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.338 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:31.338 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:31.339 22:46:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:31.596 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.596 22:46:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:31.854 Running I/O for 1 seconds... 
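Before this final run, the script asserts that exactly the expected controller came up by piping bdev_nvme_get_controllers through jq, as shown just above. The same check as a standalone guard:

    name=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || { echo "controller not attached" >&2; exit 1; }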
00:15:32.788 4507.00 IOPS, 17.61 MiB/s 00:15:32.788 Latency(us) 00:15:32.788 [2024-12-07T22:46:47.554Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.788 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.788 Verification LBA range: start 0x0 length 0x2000 00:15:32.788 nvme0n1 : 1.02 4557.58 17.80 0.00 0.00 27766.07 2576.76 19184.17 00:15:32.788 [2024-12-07T22:46:47.554Z] =================================================================================================================== 00:15:32.788 [2024-12-07T22:46:47.554Z] Total : 4557.58 17.80 0.00 0.00 27766.07 2576.76 19184.17 00:15:32.788 { 00:15:32.788 "results": [ 00:15:32.788 { 00:15:32.788 "job": "nvme0n1", 00:15:32.788 "core_mask": "0x2", 00:15:32.788 "workload": "verify", 00:15:32.788 "status": "finished", 00:15:32.788 "verify_range": { 00:15:32.788 "start": 0, 00:15:32.788 "length": 8192 00:15:32.788 }, 00:15:32.788 "queue_depth": 128, 00:15:32.788 "io_size": 4096, 00:15:32.788 "runtime": 1.017206, 00:15:32.788 "iops": 4557.582239978923, 00:15:32.788 "mibps": 17.80305562491767, 00:15:32.788 "io_failed": 0, 00:15:32.788 "io_timeout": 0, 00:15:32.788 "avg_latency_us": 27766.073649698013, 00:15:32.788 "min_latency_us": 2576.756363636364, 00:15:32.788 "max_latency_us": 19184.174545454545 00:15:32.788 } 00:15:32.788 ], 00:15:32.788 "core_count": 1 00:15:32.788 } 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:32.788 nvmf_trace.0 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84250 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84250 ']' 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84250 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:32.788 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84250 00:15:33.047 22:46:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:33.047 killing process with pid 84250 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84250' 00:15:33.047 Received shutdown signal, test time was about 1.000000 seconds 00:15:33.047 00:15:33.047 Latency(us) 00:15:33.047 [2024-12-07T22:46:47.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.047 [2024-12-07T22:46:47.813Z] =================================================================================================================== 00:15:33.047 [2024-12-07T22:46:47.813Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84250 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84250 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:33.047 rmmod nvme_tcp 00:15:33.047 rmmod nvme_fabrics 00:15:33.047 rmmod nvme_keyring 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 84218 ']' 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 84218 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84218 ']' 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84218 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:33.047 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84218 00:15:33.305 killing process with pid 84218 00:15:33.305 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:33.305 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:33.305 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84218' 00:15:33.305 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84218 00:15:33.305 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # 
wait 84218 00:15:33.305 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:33.305 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:33.305 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:33.305 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:33.305 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:15:33.305 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:33.305 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:15:33.306 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:33.306 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:33.306 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:33.306 22:46:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:33.306 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:33.306 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:33.306 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:33.306 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:33.306 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:33.306 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:33.306 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:33.564 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:33.564 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:33.564 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:33.564 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:33.564 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:33.564 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:33.564 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:33.564 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.564 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:33.564 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.PcNjpxd77T /tmp/tmp.qqOLPJao6s /tmp/tmp.oJIX4EO56n 00:15:33.564 00:15:33.564 real 1m19.469s 00:15:33.564 user 2m9.208s 00:15:33.564 sys 0m25.755s 00:15:33.564 ************************************ 00:15:33.564 END TEST nvmf_tls 00:15:33.564 ************************************ 00:15:33.564 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 
-- # xtrace_disable 00:15:33.564 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.564 22:46:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:33.564 22:46:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:33.564 22:46:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:33.564 22:46:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:33.564 ************************************ 00:15:33.564 START TEST nvmf_fips 00:15:33.564 ************************************ 00:15:33.564 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:33.823 * Looking for test storage... 00:15:33.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:33.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.823 --rc genhtml_branch_coverage=1 00:15:33.823 --rc genhtml_function_coverage=1 00:15:33.823 --rc genhtml_legend=1 00:15:33.823 --rc geninfo_all_blocks=1 00:15:33.823 --rc geninfo_unexecuted_blocks=1 00:15:33.823 00:15:33.823 ' 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:33.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.823 --rc genhtml_branch_coverage=1 00:15:33.823 --rc genhtml_function_coverage=1 00:15:33.823 --rc genhtml_legend=1 00:15:33.823 --rc geninfo_all_blocks=1 00:15:33.823 --rc geninfo_unexecuted_blocks=1 00:15:33.823 00:15:33.823 ' 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:33.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.823 --rc genhtml_branch_coverage=1 00:15:33.823 --rc genhtml_function_coverage=1 00:15:33.823 --rc genhtml_legend=1 00:15:33.823 --rc geninfo_all_blocks=1 00:15:33.823 --rc geninfo_unexecuted_blocks=1 00:15:33.823 00:15:33.823 ' 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:33.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:33.823 --rc genhtml_branch_coverage=1 00:15:33.823 --rc genhtml_function_coverage=1 00:15:33.823 --rc genhtml_legend=1 00:15:33.823 --rc geninfo_all_blocks=1 00:15:33.823 --rc geninfo_unexecuted_blocks=1 00:15:33.823 00:15:33.823 ' 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
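The cmp_versions calls traced above (here guarding the lcov option setup via `lt 1.15 2`, and shortly below checking `ge 3.1.1 3.0.0` for OpenSSL) come from scripts/common.sh, which splits version strings on '.', '-' and ':' and compares components numerically from the left. A condensed sketch of that logic, simplified from the trace and not the script's exact text:

    # Succeed (return 0) when version $1 >= version $2.
    ge() {
        local IFS=.-: i v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 1
        done
        return 0   # every component equal
    }
    ge 3.1.1 3.0.0 && echo "OpenSSL is new enough for the FIPS test"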
00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:33.823 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:33.824 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:33.824 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:15:34.083 Error setting digest 00:15:34.083 40F284365E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:34.083 40F284365E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:34.083 
22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:34.083 Cannot find device "nvmf_init_br" 00:15:34.083 22:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:34.083 Cannot find device "nvmf_init_br2" 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:34.083 Cannot find device "nvmf_tgt_br" 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:34.083 Cannot find device "nvmf_tgt_br2" 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:34.083 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:34.083 Cannot find device "nvmf_init_br" 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:34.084 Cannot find device "nvmf_init_br2" 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:34.084 Cannot find device "nvmf_tgt_br" 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:34.084 Cannot find device "nvmf_tgt_br2" 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:34.084 Cannot find device "nvmf_br" 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:34.084 Cannot find device "nvmf_init_if" 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:34.084 Cannot find device "nvmf_init_if2" 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:34.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:34.084 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:34.084 22:46:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:34.084 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:34.343 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:34.343 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:15:34.343 00:15:34.343 --- 10.0.0.3 ping statistics --- 00:15:34.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.343 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:34.343 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:34.343 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:34.343 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:15:34.343 00:15:34.343 --- 10.0.0.4 ping statistics --- 00:15:34.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.344 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:34.344 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:34.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:34.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:34.344 00:15:34.344 --- 10.0.0.1 ping statistics --- 00:15:34.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.344 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:34.344 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:34.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:34.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:15:34.344 00:15:34.344 --- 10.0.0.2 ping statistics --- 00:15:34.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.344 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:34.344 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.344 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # return 0 00:15:34.344 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:34.344 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.344 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:34.344 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:34.344 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.344 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:34.344 22:46:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:34.344 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:34.344 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:34.344 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:34.344 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:34.344 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=84572 00:15:34.344 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:34.344 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 84572 00:15:34.344 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84572 ']' 00:15:34.344 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.344 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:34.344 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.344 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:34.344 22:46:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:34.602 [2024-12-07 22:46:49.110643] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:15:34.602 [2024-12-07 22:46:49.111315] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.602 [2024-12-07 22:46:49.259837] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.602 [2024-12-07 22:46:49.300896] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.602 [2024-12-07 22:46:49.301219] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.602 [2024-12-07 22:46:49.301254] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.602 [2024-12-07 22:46:49.301266] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.602 [2024-12-07 22:46:49.301275] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.602 [2024-12-07 22:46:49.301320] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.602 [2024-12-07 22:46:49.335173] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:35.539 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:35.539 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:35.539 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:35.539 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:35.539 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:35.539 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.539 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:35.539 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:35.539 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:35.539 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.N9v 00:15:35.539 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:35.539 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.N9v 00:15:35.539 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.N9v 00:15:35.539 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.N9v 00:15:35.539 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.798 [2024-12-07 22:46:50.385602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.798 [2024-12-07 22:46:50.401555] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:35.798 [2024-12-07 22:46:50.401737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:35.798 malloc0 00:15:35.798 22:46:50 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:35.798 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=84608 00:15:35.798 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:35.798 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 84608 /var/tmp/bdevperf.sock 00:15:35.798 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84608 ']' 00:15:35.798 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:35.798 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:35.798 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:35.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:35.798 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:35.798 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:35.798 [2024-12-07 22:46:50.552299] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:35.798 [2024-12-07 22:46:50.552388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84608 ] 00:15:36.057 [2024-12-07 22:46:50.693011] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.057 [2024-12-07 22:46:50.731395] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.057 [2024-12-07 22:46:50.758596] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:36.057 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:36.057 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:36.057 22:46:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.N9v 00:15:36.316 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:36.575 [2024-12-07 22:46:51.241493] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:36.575 TLSTESTn1 00:15:36.575 22:46:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:36.834 Running I/O for 10 seconds... 
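At this point fips.sh has registered the PSK file with bdevperf's keyring, attached a TLS-protected controller (TLSTEST) to the target listening on 10.0.0.3:4420 inside the test namespace, and kicked off the ten-second verify workload. The RPC sequence, condensed from the trace with paths and NQNs exactly as logged (an illustration of the flow, not the script's literal text):

    # Register the pre-shared key, attach a TLS controller, run the workload.
    sock=/var/tmp/bdevperf.sock
    scripts/rpc.py -s "$sock" keyring_file_add_key key0 /tmp/spdk-psk.N9v
    scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests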
00:15:38.752 4293.00 IOPS, 16.77 MiB/s [2024-12-07T22:46:54.895Z] 4348.00 IOPS, 16.98 MiB/s [2024-12-07T22:46:55.827Z] 4410.67 IOPS, 17.23 MiB/s [2024-12-07T22:46:56.765Z] 4444.50 IOPS, 17.36 MiB/s [2024-12-07T22:46:57.703Z] 4420.60 IOPS, 17.27 MiB/s [2024-12-07T22:46:58.639Z] 4431.17 IOPS, 17.31 MiB/s [2024-12-07T22:46:59.578Z] 4442.00 IOPS, 17.35 MiB/s [2024-12-07T22:47:00.516Z] 4447.25 IOPS, 17.37 MiB/s [2024-12-07T22:47:01.895Z] 4451.44 IOPS, 17.39 MiB/s [2024-12-07T22:47:01.895Z] 4458.30 IOPS, 17.42 MiB/s 00:15:47.129 Latency(us) 00:15:47.129 [2024-12-07T22:47:01.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.129 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:47.129 Verification LBA range: start 0x0 length 0x2000 00:15:47.129 TLSTESTn1 : 10.01 4464.23 17.44 0.00 0.00 28622.49 5600.35 22639.71 00:15:47.129 [2024-12-07T22:47:01.895Z] =================================================================================================================== 00:15:47.129 [2024-12-07T22:47:01.895Z] Total : 4464.23 17.44 0.00 0.00 28622.49 5600.35 22639.71 00:15:47.129 { 00:15:47.129 "results": [ 00:15:47.129 { 00:15:47.129 "job": "TLSTESTn1", 00:15:47.129 "core_mask": "0x4", 00:15:47.129 "workload": "verify", 00:15:47.129 "status": "finished", 00:15:47.129 "verify_range": { 00:15:47.129 "start": 0, 00:15:47.129 "length": 8192 00:15:47.129 }, 00:15:47.129 "queue_depth": 128, 00:15:47.129 "io_size": 4096, 00:15:47.129 "runtime": 10.014946, 00:15:47.129 "iops": 4464.227765182159, 00:15:47.129 "mibps": 17.438389707742807, 00:15:47.129 "io_failed": 0, 00:15:47.129 "io_timeout": 0, 00:15:47.129 "avg_latency_us": 28622.49243516152, 00:15:47.129 "min_latency_us": 5600.349090909091, 00:15:47.129 "max_latency_us": 22639.70909090909 00:15:47.129 } 00:15:47.129 ], 00:15:47.129 "core_count": 1 00:15:47.129 } 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:47.129 nvmf_trace.0 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84608 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84608 ']' 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 
84608 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84608 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:47.129 killing process with pid 84608 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84608' 00:15:47.129 Received shutdown signal, test time was about 10.000000 seconds 00:15:47.129 00:15:47.129 Latency(us) 00:15:47.129 [2024-12-07T22:47:01.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:47.129 [2024-12-07T22:47:01.895Z] =================================================================================================================== 00:15:47.129 [2024-12-07T22:47:01.895Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84608 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84608 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:47.129 rmmod nvme_tcp 00:15:47.129 rmmod nvme_fabrics 00:15:47.129 rmmod nvme_keyring 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 84572 ']' 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 84572 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84572 ']' 00:15:47.129 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 84572 00:15:47.389 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:47.389 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:47.389 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84572 00:15:47.389 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:47.389 killing process with pid 84572 00:15:47.389 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:47.389 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84572' 00:15:47.389 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84572 00:15:47.389 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84572 00:15:47.389 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:47.389 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:47.389 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:47.389 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:47.389 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:15:47.389 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:47.389 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:15:47.389 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:47.389 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:47.389 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:47.389 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:47.389 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:47.389 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:47.389 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:47.649 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:47.649 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:47.649 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:47.649 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:47.649 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:47.649 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:47.649 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:47.649 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:47.649 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:47.649 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.650 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.650 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.650 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:47.650 22:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.N9v 00:15:47.650 00:15:47.650 real 0m14.076s 00:15:47.650 user 0m19.111s 00:15:47.650 sys 0m5.598s 00:15:47.650 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:47.650 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:47.650 ************************************ 00:15:47.650 END TEST nvmf_fips 00:15:47.650 ************************************ 00:15:47.650 22:47:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:47.650 22:47:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:47.650 22:47:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:47.650 22:47:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:47.650 ************************************ 00:15:47.650 START TEST nvmf_control_msg_list 00:15:47.650 ************************************ 00:15:47.650 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:47.911 * Looking for test storage... 00:15:47.911 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:47.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.911 --rc genhtml_branch_coverage=1 00:15:47.911 --rc genhtml_function_coverage=1 00:15:47.911 --rc genhtml_legend=1 00:15:47.911 --rc geninfo_all_blocks=1 00:15:47.911 --rc geninfo_unexecuted_blocks=1 00:15:47.911 00:15:47.911 ' 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:47.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.911 --rc genhtml_branch_coverage=1 00:15:47.911 --rc genhtml_function_coverage=1 00:15:47.911 --rc genhtml_legend=1 00:15:47.911 --rc geninfo_all_blocks=1 00:15:47.911 --rc geninfo_unexecuted_blocks=1 00:15:47.911 00:15:47.911 ' 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:47.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.911 --rc genhtml_branch_coverage=1 00:15:47.911 --rc genhtml_function_coverage=1 00:15:47.911 --rc genhtml_legend=1 00:15:47.911 --rc geninfo_all_blocks=1 00:15:47.911 --rc geninfo_unexecuted_blocks=1 00:15:47.911 00:15:47.911 ' 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:47.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:47.911 --rc genhtml_branch_coverage=1 00:15:47.911 --rc genhtml_function_coverage=1 00:15:47.911 --rc genhtml_legend=1 00:15:47.911 --rc geninfo_all_blocks=1 00:15:47.911 --rc 
geninfo_unexecuted_blocks=1 00:15:47.911 00:15:47.911 ' 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.911 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:47.912 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:47.912 Cannot find device "nvmf_init_br" 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:47.912 Cannot find device "nvmf_init_br2" 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:47.912 Cannot find device "nvmf_tgt_br" 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:47.912 Cannot find device "nvmf_tgt_br2" 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:47.912 Cannot find device "nvmf_init_br" 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:47.912 Cannot find device "nvmf_init_br2" 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:47.912 Cannot find device "nvmf_tgt_br" 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:47.912 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:48.172 Cannot find device "nvmf_tgt_br2" 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:48.172 Cannot find device "nvmf_br" 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:48.172 Cannot find 
device "nvmf_init_if" 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:48.172 Cannot find device "nvmf_init_if2" 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.172 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.172 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:48.172 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:48.173 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:48.173 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:48.173 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:48.173 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:48.173 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:48.173 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:48.173 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:48.173 22:47:02 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:48.173 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:48.173 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:48.173 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:48.173 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:48.173 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:48.173 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:48.173 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:48.173 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:48.433 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:48.433 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:15:48.433 00:15:48.433 --- 10.0.0.3 ping statistics --- 00:15:48.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.433 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:48.433 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:48.433 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:15:48.433 00:15:48.433 --- 10.0.0.4 ping statistics --- 00:15:48.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.433 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:48.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:48.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:48.433 00:15:48.433 --- 10.0.0.1 ping statistics --- 00:15:48.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.433 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:48.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:48.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:15:48.433 00:15:48.433 --- 10.0.0.2 ping statistics --- 00:15:48.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.433 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # return 0 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:48.433 22:47:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:48.433 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:48.433 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:48.433 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:48.434 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:48.434 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=84985 00:15:48.434 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 84985 00:15:48.434 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:48.434 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 84985 ']' 00:15:48.434 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.434 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:48.434 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:48.434 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:48.434 22:47:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:48.434 [2024-12-07 22:47:03.065128] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:48.434 [2024-12-07 22:47:03.065239] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:48.694 [2024-12-07 22:47:03.207103] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.694 [2024-12-07 22:47:03.248241] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:48.694 [2024-12-07 22:47:03.248304] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:48.694 [2024-12-07 22:47:03.248318] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:48.694 [2024-12-07 22:47:03.248330] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:48.694 [2024-12-07 22:47:03.248338] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:48.694 [2024-12-07 22:47:03.248370] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.694 [2024-12-07 22:47:03.280865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:49.631 [2024-12-07 22:47:04.126935] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:49.631 Malloc0 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:49.631 [2024-12-07 22:47:04.179097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85017 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85018 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85019 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85017 00:15:49.631 22:47:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:49.631 [2024-12-07 22:47:04.353372] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:49.631 [2024-12-07 22:47:04.363779] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:49.631 [2024-12-07 22:47:04.364192] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:51.007 Initializing NVMe Controllers 00:15:51.007 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:51.007 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:51.007 Initialization complete. Launching workers. 00:15:51.007 ======================================================== 00:15:51.007 Latency(us) 00:15:51.007 Device Information : IOPS MiB/s Average min max 00:15:51.007 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3678.93 14.37 271.52 122.49 567.04 00:15:51.007 ======================================================== 00:15:51.007 Total : 3678.93 14.37 271.52 122.49 567.04 00:15:51.007 00:15:51.007 Initializing NVMe Controllers 00:15:51.007 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:51.007 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:51.007 Initialization complete. Launching workers. 00:15:51.007 ======================================================== 00:15:51.007 Latency(us) 00:15:51.007 Device Information : IOPS MiB/s Average min max 00:15:51.007 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3677.00 14.36 271.60 139.20 542.51 00:15:51.007 ======================================================== 00:15:51.007 Total : 3677.00 14.36 271.60 139.20 542.51 00:15:51.007 00:15:51.007 Initializing NVMe Controllers 00:15:51.007 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:51.007 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:51.007 Initialization complete. Launching workers. 
00:15:51.007 ======================================================== 00:15:51.007 Latency(us) 00:15:51.007 Device Information : IOPS MiB/s Average min max 00:15:51.007 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3670.00 14.34 272.10 164.79 567.34 00:15:51.007 ======================================================== 00:15:51.007 Total : 3670.00 14.34 272.10 164.79 567.34 00:15:51.007 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85018 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85019 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:51.007 rmmod nvme_tcp 00:15:51.007 rmmod nvme_fabrics 00:15:51.007 rmmod nvme_keyring 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 84985 ']' 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 84985 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 84985 ']' 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 84985 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:15:51.007 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84985 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:51.008 killing process with pid 84985 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84985' 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 84985 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 84985 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:51.008 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:51.266 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:51.266 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:51.266 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.266 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.266 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:51.266 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.266 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.266 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.266 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:51.266 00:15:51.266 real 0m3.515s 00:15:51.266 user 0m5.589s 00:15:51.266 
sys 0m1.348s 00:15:51.266 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:51.266 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:51.266 ************************************ 00:15:51.266 END TEST nvmf_control_msg_list 00:15:51.266 ************************************ 00:15:51.266 22:47:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:51.266 22:47:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:51.266 22:47:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:51.266 22:47:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:51.266 ************************************ 00:15:51.266 START TEST nvmf_wait_for_buf 00:15:51.266 ************************************ 00:15:51.266 22:47:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:51.526 * Looking for test storage... 00:15:51.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:51.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.527 --rc genhtml_branch_coverage=1 00:15:51.527 --rc genhtml_function_coverage=1 00:15:51.527 --rc genhtml_legend=1 00:15:51.527 --rc geninfo_all_blocks=1 00:15:51.527 --rc geninfo_unexecuted_blocks=1 00:15:51.527 00:15:51.527 ' 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:51.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.527 --rc genhtml_branch_coverage=1 00:15:51.527 --rc genhtml_function_coverage=1 00:15:51.527 --rc genhtml_legend=1 00:15:51.527 --rc geninfo_all_blocks=1 00:15:51.527 --rc geninfo_unexecuted_blocks=1 00:15:51.527 00:15:51.527 ' 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:51.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.527 --rc genhtml_branch_coverage=1 00:15:51.527 --rc genhtml_function_coverage=1 00:15:51.527 --rc genhtml_legend=1 00:15:51.527 --rc geninfo_all_blocks=1 00:15:51.527 --rc geninfo_unexecuted_blocks=1 00:15:51.527 00:15:51.527 ' 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:51.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.527 --rc genhtml_branch_coverage=1 00:15:51.527 --rc genhtml_function_coverage=1 00:15:51.527 --rc genhtml_legend=1 00:15:51.527 --rc geninfo_all_blocks=1 00:15:51.527 --rc geninfo_unexecuted_blocks=1 00:15:51.527 00:15:51.527 ' 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:51.527 22:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:51.527 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:51.527 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 
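
The common.sh preamble traced above builds the target's identity and base command line piece by piece. Reconstructed from the trace as a minimal sketch (not the verbatim SPDK source; the binary path and the 0xFFFF tracepoint mask are taken from the log, the shell defaults are assumed):

    # Host identity: nvme-cli's gen-hostnqn emits a UUID-based NQN, and the
    # host id the harness reuses is just that NQN's UUID suffix.
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    # Target command line: shared-memory instance id plus the tracepoint mask
    # that shows up again when the app is actually launched below.
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)

The 'integer expression expected' complaint above is the script itself evaluating '[' '' -eq 1 ']' at nvmf/common.sh line 33 with an unset variable; the harness tolerates the failed test and carries on.
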
00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:51.528 Cannot find device "nvmf_init_br" 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:51.528 Cannot find device "nvmf_init_br2" 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:51.528 Cannot find device "nvmf_tgt_br" 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.528 Cannot find device "nvmf_tgt_br2" 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:51.528 Cannot find device "nvmf_init_br" 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:51.528 Cannot find device "nvmf_init_br2" 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:51.528 Cannot find device "nvmf_tgt_br" 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:51.528 Cannot find device "nvmf_tgt_br2" 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:51.528 Cannot find device "nvmf_br" 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:51.528 Cannot find device "nvmf_init_if" 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:51.528 Cannot find device "nvmf_init_if2" 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:51.528 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.788 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.788 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:51.788 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:51.788 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:15:51.788 00:15:51.788 --- 10.0.0.3 ping statistics --- 00:15:51.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.788 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:51.788 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:51.788 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:15:51.788 00:15:51.788 --- 10.0.0.4 ping statistics --- 00:15:51.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.788 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:51.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:51.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:51.788 00:15:51.788 --- 10.0.0.1 ping statistics --- 00:15:51.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:51.788 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:51.788 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:52.047 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:52.047 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:15:52.047 00:15:52.047 --- 10.0.0.2 ping statistics --- 00:15:52.047 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.047 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:15:52.047 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.047 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # return 0 00:15:52.047 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:52.047 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.047 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:52.047 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:52.048 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.048 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:52.048 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:52.048 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:52.048 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:52.048 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:52.048 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.048 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=85251 00:15:52.048 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:52.048 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 85251 00:15:52.048 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 85251 ']' 00:15:52.048 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.048 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:52.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.048 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.048 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:52.048 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.048 [2024-12-07 22:47:06.641389] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
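
With the target process up and its RPC socket answering, the trace that follows is the substance of wait_for_buf.sh: shrink the transport's small iobuf pool so that 128 KiB (-o 131072) reads are forced to queue for buffers, then prove that they did. Condensed from the rpc_cmd and perf steps visible below (rpc_cmd is the harness helper that forwards to scripts/rpc.py; the final check is paraphrased):

    rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
    rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    rpc_cmd framework_start_init        # leave --wait-for-rpc mode
    rpc_cmd bdev_malloc_create -b Malloc0 32 512
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    retry_count=$(rpc_cmd iobuf_get_stats |
        jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    (( retry_count == 0 )) && exit 1    # zero retries would mean nothing ever waited

In this run the retry counter comes back as 4750, so the wait-for-buffer path was exercised and the test passes.
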
00:15:52.048 [2024-12-07 22:47:06.641489] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.048 [2024-12-07 22:47:06.776973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.306 [2024-12-07 22:47:06.817919] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:52.306 [2024-12-07 22:47:06.817984] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:52.306 [2024-12-07 22:47:06.817997] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:52.306 [2024-12-07 22:47:06.818007] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:52.306 [2024-12-07 22:47:06.818016] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:52.306 [2024-12-07 22:47:06.818046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.306 22:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.306 [2024-12-07 22:47:06.957237] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.306 Malloc0 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.306 [2024-12-07 22:47:06.994230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.306 22:47:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.306 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.306 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:52.306 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.306 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.306 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.306 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:52.306 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.306 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.306 [2024-12-07 22:47:07.018260] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:52.306 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.307 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:52.564 [2024-12-07 22:47:07.195042] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:15:53.940 Initializing NVMe Controllers
00:15:53.940 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:15:53.940 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:15:53.940 Initialization complete. Launching workers.
00:15:53.940 ========================================================
00:15:53.940 Latency(us)
00:15:53.940 Device Information : IOPS MiB/s Average min max
00:15:53.940 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 498.00 62.25 8032.41 5074.77 11914.22
00:15:53.940 ========================================================
00:15:53.940 Total : 498.00 62.25 8032.41 5074.77 11914.22
00:15:53.940
00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:53.940 rmmod nvme_tcp 00:15:53.940 rmmod nvme_fabrics 00:15:53.940 rmmod nvme_keyring 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 85251 ']' 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 85251 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 85251 ']' 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 --
# kill -0 85251 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85251 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85251' 00:15:53.940 killing process with pid 85251 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 85251 00:15:53.940 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 85251 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:54.199 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:54.458 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:54.458 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.458 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.458 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:54.458 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.458 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.458 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.458 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:54.458 00:15:54.458 real 0m3.106s 00:15:54.458 user 0m2.456s 00:15:54.458 sys 0m0.723s 00:15:54.458 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:54.458 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:54.458 ************************************ 00:15:54.458 END TEST nvmf_wait_for_buf 00:15:54.458 ************************************ 00:15:54.458 22:47:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:15:54.458 22:47:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:54.458 22:47:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:54.459 22:47:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:54.459 22:47:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.459 ************************************ 00:15:54.459 START TEST nvmf_fuzz 00:15:54.459 ************************************ 00:15:54.459 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:54.459 * Looking for test storage... 
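
Every suite in this job runs through the same run_test wrapper that produced the banner rows and the real/user/sys timing just above. Its observable behavior reconstructs to roughly the following (autotest_common.sh internals simplified to the parts the log shows):

    run_test() {
        # $1 names the suite; the remaining words are the command to execute
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # the real/user/sys lines come from this
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp
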
00:15:54.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:54.459 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:54.459 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:15:54.459 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:54.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.718 --rc genhtml_branch_coverage=1 00:15:54.718 --rc genhtml_function_coverage=1 00:15:54.718 --rc genhtml_legend=1 00:15:54.718 --rc geninfo_all_blocks=1 00:15:54.718 --rc geninfo_unexecuted_blocks=1 00:15:54.718 00:15:54.718 ' 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:54.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.718 --rc genhtml_branch_coverage=1 00:15:54.718 --rc genhtml_function_coverage=1 00:15:54.718 --rc genhtml_legend=1 00:15:54.718 --rc geninfo_all_blocks=1 00:15:54.718 --rc geninfo_unexecuted_blocks=1 00:15:54.718 00:15:54.718 ' 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:54.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.718 --rc genhtml_branch_coverage=1 00:15:54.718 --rc genhtml_function_coverage=1 00:15:54.718 --rc genhtml_legend=1 00:15:54.718 --rc geninfo_all_blocks=1 00:15:54.718 --rc geninfo_unexecuted_blocks=1 00:15:54.718 00:15:54.718 ' 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:54.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.718 --rc genhtml_branch_coverage=1 00:15:54.718 --rc genhtml_function_coverage=1 00:15:54.718 --rc genhtml_legend=1 00:15:54.718 --rc geninfo_all_blocks=1 00:15:54.718 --rc geninfo_unexecuted_blocks=1 00:15:54.718 00:15:54.718 ' 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
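
The lt 1.15 2 exchange above is scripts/common.sh deciding whether the installed lcov predates version 2 before choosing coverage flags. From the steps its trace walks through (split on IFS=.-, validate each field as a decimal, compare field by field), the comparison reconstructs to roughly this sketch (helper names from the trace, bodies simplified):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$3"
        # walk the longer of the two version strings, padding missing fields with 0
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then [[ $op == '>' ]]; return; fi
            if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }

Here 1 < 2 decides on the first field, lt returns 0, and the pre-2.0 lcov flag set (lcov_branch_coverage/lcov_function_coverage) is exported, as the LCOV_OPTS lines above show.
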
00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.718 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:54.719 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
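
nvmftestinit now rebuilds the virtual test network from scratch; the nomaster/delete probes that follow are leftover cleanup and are expected to fail on a fresh host, which is why each "Cannot find device" is paired with a tolerated true. Boiled down from the ip commands in the trace (one interface of each kind shown; the *2 twins are configured identically):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # enslave both host-side ends
    ip link set nvmf_tgt_br master nvmf_br
    # bring the remaining links up; the initiator (10.0.0.1) and the namespaced
    # target (10.0.0.3) then reach each other across the bridge, which the
    # ping checks at the end of the setup confirm

The iptables rules installed right after go through the ipts wrapper, which tags every rule with an 'SPDK_NVMF:' comment; that tag is what lets teardown (the iptr call seen at the end of the previous test) remove exactly these rules with iptables-save | grep -v SPDK_NVMF | iptables-restore.
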
00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:54.719 Cannot find device "nvmf_init_br" 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:15:54.719 22:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:54.719 Cannot find device "nvmf_init_br2" 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:54.719 Cannot find device "nvmf_tgt_br" 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.719 Cannot find device "nvmf_tgt_br2" 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:54.719 Cannot find device "nvmf_init_br" 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:54.719 Cannot find device "nvmf_init_br2" 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:54.719 Cannot find device "nvmf_tgt_br" 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:54.719 Cannot find device "nvmf_tgt_br2" 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:54.719 Cannot find device "nvmf_br" 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:54.719 Cannot find device "nvmf_init_if" 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:54.719 Cannot find device "nvmf_init_if2" 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.719 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.719 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:15:54.719 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:54.978 22:47:09 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:54.978 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:54.978 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:15:54.978 00:15:54.978 --- 10.0.0.3 ping statistics --- 00:15:54.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.978 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:54.978 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:54.978 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:15:54.978 00:15:54.978 --- 10.0.0.4 ping statistics --- 00:15:54.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.978 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:54.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:54.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:15:54.978 00:15:54.978 --- 10.0.0.1 ping statistics --- 00:15:54.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.978 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:54.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:54.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:15:54.978 00:15:54.978 --- 10.0.0.2 ping statistics --- 00:15:54.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.978 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.978 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # return 0 00:15:54.979 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:54.979 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.979 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:54.979 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:54.979 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.979 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:54.979 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:55.238 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=85508 00:15:55.238 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:55.238 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 85508 00:15:55.238 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 85508 ']' 00:15:55.238 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:55.238 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.238 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:55.238 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
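(Editor's note: the sequence above is nvmf_veth_init from test/nvmf/common.sh: the trace first tears down any stale interfaces, then rebuilds the topology and verifies it with pings before launching nvmf_tgt inside the namespace. Below is a condensed, hedged sketch of that topology with one initiator veth pair and one target pair instead of the two of each the trace creates; it assumes root privileges and iproute2, and omits the iptables ACCEPT rules the trace adds via ipts.)

# Sketch: one veth pair on the host (initiator side), one inside a network
# namespace (target side), joined by a bridge. Addresses follow the trace.
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br   # host/initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns "$NS"                         # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip link set nvmf_init_br master nvmf_br                     # bridge the host-side peers
ip link set nvmf_tgt_br master nvmf_br
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up
ping -c 1 10.0.0.3   # host -> namespaced target, the same check the trace runs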
00:15:55.238 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:55.238 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.497 Malloc0 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:15:55.497 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:15:55.756 Shutting down the fuzz application 00:15:55.756 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:15:56.016 Shutting down the fuzz application 00:15:56.016 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.016 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.016 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.016 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.016 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:56.016 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:15:56.016 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:56.016 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:15:56.016 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:56.016 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:15:56.016 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:56.016 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:56.016 rmmod nvme_tcp 00:15:56.016 rmmod nvme_fabrics 00:15:56.016 rmmod nvme_keyring 00:15:56.016 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 85508 ']' 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 85508 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 85508 ']' 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 85508 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85508 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:56.277 killing process with pid 85508 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85508' 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 85508 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 85508 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:56.277 22:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:56.277 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:56.277 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:56.277 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:15:56.537 00:15:56.537 real 0m2.122s 00:15:56.537 user 0m1.725s 00:15:56.537 sys 0m0.654s 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.537 ************************************ 00:15:56.537 END TEST nvmf_fuzz 00:15:56.537 ************************************ 00:15:56.537 22:47:11 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:56.537 ************************************ 00:15:56.537 START TEST nvmf_multiconnection 00:15:56.537 ************************************ 00:15:56.537 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:56.795 * Looking for test storage... 00:15:56.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:56.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.795 --rc genhtml_branch_coverage=1 00:15:56.795 --rc genhtml_function_coverage=1 00:15:56.795 --rc genhtml_legend=1 00:15:56.795 --rc geninfo_all_blocks=1 00:15:56.795 --rc geninfo_unexecuted_blocks=1 00:15:56.795 00:15:56.795 ' 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:56.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.795 --rc genhtml_branch_coverage=1 00:15:56.795 --rc genhtml_function_coverage=1 00:15:56.795 --rc genhtml_legend=1 00:15:56.795 --rc geninfo_all_blocks=1 00:15:56.795 --rc geninfo_unexecuted_blocks=1 00:15:56.795 00:15:56.795 ' 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:56.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.795 --rc genhtml_branch_coverage=1 00:15:56.795 --rc genhtml_function_coverage=1 00:15:56.795 --rc genhtml_legend=1 00:15:56.795 --rc geninfo_all_blocks=1 00:15:56.795 --rc geninfo_unexecuted_blocks=1 00:15:56.795 00:15:56.795 ' 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:56.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.795 --rc genhtml_branch_coverage=1 00:15:56.795 --rc genhtml_function_coverage=1 00:15:56.795 --rc genhtml_legend=1 00:15:56.795 --rc geninfo_all_blocks=1 00:15:56.795 --rc geninfo_unexecuted_blocks=1 00:15:56.795 00:15:56.795 ' 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.795 
22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.795 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:56.796 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.796 22:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:56.796 Cannot find device "nvmf_init_br" 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:56.796 Cannot find device "nvmf_init_br2" 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:56.796 Cannot find device "nvmf_tgt_br" 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.796 Cannot find device "nvmf_tgt_br2" 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:56.796 Cannot find device "nvmf_init_br" 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:15:56.796 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:57.054 Cannot find device "nvmf_init_br2" 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:57.054 Cannot find device "nvmf_tgt_br" 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:57.054 Cannot find device "nvmf_tgt_br2" 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:57.054 Cannot find device "nvmf_br" 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:57.054 Cannot find device "nvmf_init_if" 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:15:57.054 Cannot find device "nvmf_init_if2" 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:57.054 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:57.313 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:57.313 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:57.313 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:57.313 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:57.313 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:57.313 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:57.313 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:57.313 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:57.313 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:57.313 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:57.313 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:57.313 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:15:57.313 00:15:57.313 --- 10.0.0.3 ping statistics --- 00:15:57.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.314 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:57.314 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:57.314 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:15:57.314 00:15:57.314 --- 10.0.0.4 ping statistics --- 00:15:57.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.314 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:57.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:57.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:15:57.314 00:15:57.314 --- 10.0.0.1 ping statistics --- 00:15:57.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.314 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:57.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:15:57.314 00:15:57.314 --- 10.0.0.2 ping statistics --- 00:15:57.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.314 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # return 0 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=85740 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 85740 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 85740 ']' 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:57.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
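(Editor's note: at this point nvmfappstart has launched nvmf_tgt inside the namespace with -m 0xF, and the harness blocks in waitforlisten until the RPC socket appears. A hypothetical, much-simplified stand-in for that wait, not SPDK's actual waitforlisten implementation, might look like this:)

# Poll until the target either dies or exposes its UNIX-domain RPC socket.
# The pid and socket path are the ones the trace shows; the retry budget is arbitrary.
wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited early
        [[ -S "$sock" ]] && return 0             # socket is up, RPCs can proceed
        sleep 0.1
    done
    return 1                                     # timed out
}
# usage: nvmf_tgt -i 0 -e 0xFFFF -m 0xF & wait_for_rpc_sock "$!"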
00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:57.314 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:57.314 [2024-12-07 22:47:11.984499] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:15:57.314 [2024-12-07 22:47:11.984603] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.573 [2024-12-07 22:47:12.117248] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.573 [2024-12-07 22:47:12.156432] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.573 [2024-12-07 22:47:12.156490] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.573 [2024-12-07 22:47:12.156506] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.573 [2024-12-07 22:47:12.156517] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.573 [2024-12-07 22:47:12.156526] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.573 [2024-12-07 22:47:12.156694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.573 [2024-12-07 22:47:12.156791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.573 [2024-12-07 22:47:12.156935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:15:57.573 [2024-12-07 22:47:12.156939] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.573 [2024-12-07 22:47:12.190088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:58.563 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.563 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:15:58.563 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:58.563 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:58.563 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.563 [2024-12-07 22:47:13.012529] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:15:58.563 22:47:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.563 Malloc1 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.563 [2024-12-07 22:47:13.063464] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.563 Malloc2 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.563 Malloc3 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.563 Malloc4 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.563 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 Malloc5 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:15:58.564 
22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 Malloc6 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 Malloc7 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.564 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.839 Malloc8 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.839 
22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.839 Malloc9 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.839 Malloc10 00:15:58.839 22:47:13 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.839 Malloc11 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.839 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.840 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.840 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:15:58.840 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:58.840 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:15:59.097 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:15:59.097 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:15:59.097 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.097 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:59.097 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:01.000 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:01.000 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:01.000 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:16:01.000 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:01.000 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.000 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:01.000 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:01.000 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:16:01.000 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:16:01.000 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:01.000 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:01.000 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:01.000 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:03.531 22:47:17 
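For reference, the loop that produced the trace above (the @21-@25 markers point into target/multiconnection.sh) can be reconstructed as the following minimal sketch. Every RPC name and argument is taken verbatim from the trace; the only context filled in is NVMF_SUBSYS=11 (visible below as the "seq 1 11" expansion at marker @28) and the fact that rpc_cmd is the autotest shorthand for driving scripts/rpc.py against the running target.

    NVMF_SUBSYS=11
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        # 64 MB malloc bdev with a 512-byte block size
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
        # subsystem that accepts any host (-a) and reports serial number SPDK$i
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        # expose the bdev as a namespace of that subsystem
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        # NVMe/TCP listener on the in-test target address
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
    done

The "NVMe/TCP Target Listening on 10.0.0.3 port 4420" notice is printed only once, after the first listener add, because the ten later adds attach subsystems to the same already-open TCP listener.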
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:03.531 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:03.531 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:16:03.531 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:03.531 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:03.531 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:03.531 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:03.531 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:16:03.531 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:16:03.531 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:03.531 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:03.531 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:03.531 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:05.436 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:05.436 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:05.436 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:16:05.436 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:05.436 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:05.436 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:05.436 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:05.436 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:16:05.436 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:16:05.436 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:05.436 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:05.436 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:16:05.436 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:07.342 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:07.342 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:07.342 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:16:07.342 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:07.342 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:07.342 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:07.342 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:07.342 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:16:07.601 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:16:07.602 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:07.602 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:07.602 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:07.602 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:09.505 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:09.505 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:09.505 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:16:09.505 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:09.505 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:09.505 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:09.505 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:09.505 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:16:09.763 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:16:09.763 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:09.763 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:16:09.763 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:09.763 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:11.662 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:11.662 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:11.662 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:16:11.662 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:11.662 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:11.662 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:11.662 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:11.662 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:16:11.920 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:16:11.920 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:11.920 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:11.920 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:11.920 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:13.824 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:13.824 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:13.824 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:16:13.824 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:13.824 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:13.824 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:13.824 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:13.824 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:16:14.084 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:16:14.084 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # local i=0 00:16:14.084 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:14.084 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:14.084 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:15.989 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:15.989 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:15.989 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:16:15.989 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:15.989 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:15.989 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:15.989 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:15.989 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:16:16.248 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:16:16.248 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:16.248 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:16.248 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:16.248 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:18.154 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:18.154 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:18.154 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:16:18.154 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:18.154 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:18.154 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:18.154 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:18.154 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:16:18.413 22:47:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:16:18.413 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:18.413 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:18.413 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:18.413 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:20.319 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:20.319 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:20.319 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:16:20.578 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:20.578 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:20.578 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:20.578 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:20.578 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:16:20.578 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:16:20.578 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:20.578 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:20.578 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:20.578 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:22.504 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:22.504 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:22.504 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:16:22.763 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:22.763 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:22.763 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:22.763 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:16:22.763 [global] 00:16:22.763 thread=1 00:16:22.763 invalidate=1 00:16:22.763 rw=read 00:16:22.763 time_based=1 
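The connect phase (markers @28-@30) then pairs an nvme connect with a waitforserial poll for each subsystem. A sketch under the same caveat, with the helper's body simplified but keeping the logic visible in the trace: up to 15 iterations, a 2-second sleep between attempts, and an lsblk serial count compared against an expected device count. The "[[ -n '' ]]" test seen above is the check for an optional second argument that would override the default count of 1.

    HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3   # host UUID used throughout this run
    for i in $(seq 1 "$NVMF_SUBSYS"); do
        nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid="$HOSTID" \
            -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.3 -s 4420
        waitforserial "SPDK$i"
    done

    # simplified from the autotest_common.sh helper traced at markers @1198-@1208
    waitforserial() {
        local i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$1")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

Every connect in this run succeeds on the first poll (nvme_devices=1 after a single sleep). Once the eleventh serial shows up, the script launches the read workload through scripts/fio-wrapper; the job file the wrapper echoes begins just above and resumes directly below.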
00:16:22.764 runtime=10 00:16:22.764 ioengine=libaio 00:16:22.764 direct=1 00:16:22.764 bs=262144 00:16:22.764 iodepth=64 00:16:22.764 norandommap=1 00:16:22.764 numjobs=1 00:16:22.764 00:16:22.764 [job0] 00:16:22.764 filename=/dev/nvme0n1 00:16:22.764 [job1] 00:16:22.764 filename=/dev/nvme10n1 00:16:22.764 [job2] 00:16:22.764 filename=/dev/nvme1n1 00:16:22.764 [job3] 00:16:22.764 filename=/dev/nvme2n1 00:16:22.764 [job4] 00:16:22.764 filename=/dev/nvme3n1 00:16:22.764 [job5] 00:16:22.764 filename=/dev/nvme4n1 00:16:22.764 [job6] 00:16:22.764 filename=/dev/nvme5n1 00:16:22.764 [job7] 00:16:22.764 filename=/dev/nvme6n1 00:16:22.764 [job8] 00:16:22.764 filename=/dev/nvme7n1 00:16:22.764 [job9] 00:16:22.764 filename=/dev/nvme8n1 00:16:22.764 [job10] 00:16:22.764 filename=/dev/nvme9n1 00:16:22.764 Could not set queue depth (nvme0n1) 00:16:22.764 Could not set queue depth (nvme10n1) 00:16:22.764 Could not set queue depth (nvme1n1) 00:16:22.764 Could not set queue depth (nvme2n1) 00:16:22.764 Could not set queue depth (nvme3n1) 00:16:22.764 Could not set queue depth (nvme4n1) 00:16:22.764 Could not set queue depth (nvme5n1) 00:16:22.764 Could not set queue depth (nvme6n1) 00:16:22.764 Could not set queue depth (nvme7n1) 00:16:22.764 Could not set queue depth (nvme8n1) 00:16:22.764 Could not set queue depth (nvme9n1) 00:16:23.022 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.022 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.022 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.022 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.022 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.022 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.022 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.022 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.022 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.022 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.022 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.022 fio-3.35 00:16:23.022 Starting 11 threads 00:16:35.232 00:16:35.232 job0: (groupid=0, jobs=1): err= 0: pid=86195: Sat Dec 7 22:47:48 2024 00:16:35.232 read: IOPS=1283, BW=321MiB/s (337MB/s)(3217MiB/10024msec) 00:16:35.232 slat (usec): min=20, max=24008, avg=772.64, stdev=1694.95 00:16:35.232 clat (usec): min=21031, max=99333, avg=48986.39, stdev=4858.13 00:16:35.232 lat (usec): min=24373, max=99363, avg=49759.03, stdev=4843.06 00:16:35.232 clat percentiles (usec): 00:16:35.232 | 1.00th=[39584], 5.00th=[42206], 10.00th=[43779], 20.00th=[45351], 00:16:35.233 | 30.00th=[46400], 40.00th=[47973], 50.00th=[49021], 60.00th=[50070], 00:16:35.233 | 70.00th=[51119], 80.00th=[52691], 90.00th=[54264], 95.00th=[55313], 00:16:35.233 | 99.00th=[58459], 99.50th=[63701], 99.90th=[92799], 99.95th=[96994], 00:16:35.233 | 99.99th=[99091] 00:16:35.233 bw ( KiB/s): 
min=284729, max=347831, per=46.56%, avg=328066.40, stdev=13731.39, samples=20 00:16:35.233 iops : min= 1112, max= 1358, avg=1281.25, stdev=53.53, samples=20 00:16:35.233 lat (msec) : 50=59.97%, 100=40.03% 00:16:35.233 cpu : usr=0.68%, sys=4.46%, ctx=2427, majf=0, minf=4097 00:16:35.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:16:35.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.233 issued rwts: total=12869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.233 job1: (groupid=0, jobs=1): err= 0: pid=86196: Sat Dec 7 22:47:48 2024 00:16:35.233 read: IOPS=197, BW=49.4MiB/s (51.8MB/s)(500MiB/10119msec) 00:16:35.233 slat (usec): min=23, max=204269, avg=4997.11, stdev=12994.75 00:16:35.233 clat (msec): min=15, max=456, avg=318.36, stdev=55.00 00:16:35.233 lat (msec): min=17, max=456, avg=323.36, stdev=55.27 00:16:35.233 clat percentiles (msec): 00:16:35.233 | 1.00th=[ 93], 5.00th=[ 239], 10.00th=[ 266], 20.00th=[ 296], 00:16:35.233 | 30.00th=[ 309], 40.00th=[ 317], 50.00th=[ 326], 60.00th=[ 334], 00:16:35.233 | 70.00th=[ 342], 80.00th=[ 359], 90.00th=[ 376], 95.00th=[ 393], 00:16:35.233 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 456], 99.95th=[ 456], 00:16:35.233 | 99.99th=[ 456] 00:16:35.233 bw ( KiB/s): min=43520, max=53760, per=7.04%, avg=49587.20, stdev=2638.43, samples=20 00:16:35.233 iops : min= 170, max= 210, avg=193.70, stdev=10.31, samples=20 00:16:35.233 lat (msec) : 20=0.20%, 100=0.85%, 250=5.35%, 500=93.60% 00:16:35.233 cpu : usr=0.08%, sys=1.15%, ctx=381, majf=0, minf=4097 00:16:35.233 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:16:35.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.233 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.233 issued rwts: total=2000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.233 job2: (groupid=0, jobs=1): err= 0: pid=86197: Sat Dec 7 22:47:48 2024 00:16:35.233 read: IOPS=152, BW=38.0MiB/s (39.9MB/s)(386MiB/10159msec) 00:16:35.233 slat (usec): min=21, max=151683, avg=6491.68, stdev=16253.02 00:16:35.233 clat (msec): min=20, max=753, avg=413.52, stdev=135.42 00:16:35.233 lat (msec): min=21, max=753, avg=420.01, stdev=137.47 00:16:35.233 clat percentiles (msec): 00:16:35.233 | 1.00th=[ 42], 5.00th=[ 268], 10.00th=[ 292], 20.00th=[ 317], 00:16:35.233 | 30.00th=[ 330], 40.00th=[ 347], 50.00th=[ 363], 60.00th=[ 380], 00:16:35.233 | 70.00th=[ 510], 80.00th=[ 584], 90.00th=[ 617], 95.00th=[ 642], 00:16:35.233 | 99.00th=[ 667], 99.50th=[ 676], 99.90th=[ 693], 99.95th=[ 751], 00:16:35.233 | 99.99th=[ 751] 00:16:35.233 bw ( KiB/s): min=23552, max=50176, per=5.38%, avg=37918.30, stdev=10468.90, samples=20 00:16:35.233 iops : min= 92, max= 196, avg=148.10, stdev=40.88, samples=20 00:16:35.233 lat (msec) : 50=1.10%, 250=3.04%, 500=65.57%, 750=30.23%, 1000=0.06% 00:16:35.233 cpu : usr=0.09%, sys=0.81%, ctx=339, majf=0, minf=4097 00:16:35.233 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:16:35.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.233 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.233 issued rwts: total=1545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.233 latency 
: target=0, window=0, percentile=100.00%, depth=64 00:16:35.233 job3: (groupid=0, jobs=1): err= 0: pid=86198: Sat Dec 7 22:47:48 2024 00:16:35.233 read: IOPS=147, BW=36.9MiB/s (38.7MB/s)(375MiB/10160msec) 00:16:35.233 slat (usec): min=21, max=213728, avg=6515.87, stdev=16850.14 00:16:35.233 clat (msec): min=12, max=751, avg=426.00, stdev=146.11 00:16:35.233 lat (msec): min=13, max=751, avg=432.52, stdev=147.94 00:16:35.233 clat percentiles (msec): 00:16:35.233 | 1.00th=[ 66], 5.00th=[ 180], 10.00th=[ 284], 20.00th=[ 313], 00:16:35.233 | 30.00th=[ 351], 40.00th=[ 376], 50.00th=[ 401], 60.00th=[ 426], 00:16:35.233 | 70.00th=[ 527], 80.00th=[ 584], 90.00th=[ 634], 95.00th=[ 667], 00:16:35.233 | 99.00th=[ 701], 99.50th=[ 743], 99.90th=[ 751], 99.95th=[ 751], 00:16:35.233 | 99.99th=[ 751] 00:16:35.233 bw ( KiB/s): min=22528, max=51200, per=5.22%, avg=36811.20, stdev=10167.25, samples=20 00:16:35.233 iops : min= 88, max= 200, avg=143.65, stdev=39.67, samples=20 00:16:35.233 lat (msec) : 20=0.67%, 100=2.40%, 250=2.60%, 500=63.16%, 750=30.91% 00:16:35.233 lat (msec) : 1000=0.27% 00:16:35.233 cpu : usr=0.11%, sys=0.58%, ctx=314, majf=0, minf=4097 00:16:35.233 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:16:35.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.233 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.233 issued rwts: total=1501,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.233 job4: (groupid=0, jobs=1): err= 0: pid=86199: Sat Dec 7 22:47:48 2024 00:16:35.233 read: IOPS=120, BW=30.2MiB/s (31.7MB/s)(306MiB/10125msec) 00:16:35.233 slat (usec): min=22, max=464719, avg=8158.98, stdev=26808.50 00:16:35.233 clat (msec): min=15, max=905, avg=520.10, stdev=167.44 00:16:35.233 lat (msec): min=16, max=1089, avg=528.25, stdev=170.22 00:16:35.233 clat percentiles (msec): 00:16:35.233 | 1.00th=[ 203], 5.00th=[ 243], 10.00th=[ 355], 20.00th=[ 405], 00:16:35.233 | 30.00th=[ 426], 40.00th=[ 439], 50.00th=[ 456], 60.00th=[ 498], 00:16:35.233 | 70.00th=[ 642], 80.00th=[ 684], 90.00th=[ 776], 95.00th=[ 818], 00:16:35.233 | 99.00th=[ 869], 99.50th=[ 885], 99.90th=[ 885], 99.95th=[ 902], 00:16:35.233 | 99.99th=[ 902] 00:16:35.233 bw ( KiB/s): min=14336, max=44120, per=4.22%, avg=29769.55, stdev=9205.29, samples=20 00:16:35.233 iops : min= 56, max= 172, avg=116.15, stdev=35.97, samples=20 00:16:35.233 lat (msec) : 20=0.41%, 250=4.90%, 500=54.86%, 750=27.76%, 1000=12.08% 00:16:35.233 cpu : usr=0.02%, sys=0.67%, ctx=223, majf=0, minf=4098 00:16:35.233 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.9% 00:16:35.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.233 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.233 issued rwts: total=1225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.233 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.233 job5: (groupid=0, jobs=1): err= 0: pid=86200: Sat Dec 7 22:47:48 2024 00:16:35.233 read: IOPS=197, BW=49.5MiB/s (51.9MB/s)(501MiB/10126msec) 00:16:35.233 slat (usec): min=20, max=186491, avg=4996.84, stdev=12632.67 00:16:35.233 clat (msec): min=22, max=433, avg=317.89, stdev=55.37 00:16:35.233 lat (msec): min=23, max=442, avg=322.88, stdev=55.79 00:16:35.233 clat percentiles (msec): 00:16:35.233 | 1.00th=[ 39], 5.00th=[ 247], 10.00th=[ 271], 20.00th=[ 292], 00:16:35.233 | 30.00th=[ 305], 40.00th=[ 313], 
50.00th=[ 321], 60.00th=[ 330], 00:16:35.233 | 70.00th=[ 338], 80.00th=[ 355], 90.00th=[ 376], 95.00th=[ 388], 00:16:35.233 | 99.00th=[ 426], 99.50th=[ 430], 99.90th=[ 435], 99.95th=[ 435], 00:16:35.233 | 99.99th=[ 435] 00:16:35.233 bw ( KiB/s): min=45477, max=55296, per=7.05%, avg=49694.40, stdev=2764.72, samples=20 00:16:35.233 iops : min= 177, max= 216, avg=193.95, stdev=10.82, samples=20 00:16:35.233 lat (msec) : 50=1.65%, 250=3.69%, 500=94.66% 00:16:35.233 cpu : usr=0.14%, sys=0.87%, ctx=422, majf=0, minf=4097 00:16:35.234 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:16:35.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.234 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.234 issued rwts: total=2004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.234 job6: (groupid=0, jobs=1): err= 0: pid=86201: Sat Dec 7 22:47:48 2024 00:16:35.234 read: IOPS=118, BW=29.7MiB/s (31.2MB/s)(301MiB/10123msec) 00:16:35.234 slat (usec): min=21, max=306822, avg=8307.91, stdev=25610.56 00:16:35.234 clat (msec): min=21, max=1020, avg=529.33, stdev=207.25 00:16:35.234 lat (msec): min=21, max=1038, avg=537.63, stdev=209.45 00:16:35.234 clat percentiles (msec): 00:16:35.234 | 1.00th=[ 53], 5.00th=[ 171], 10.00th=[ 338], 20.00th=[ 397], 00:16:35.234 | 30.00th=[ 422], 40.00th=[ 439], 50.00th=[ 464], 60.00th=[ 510], 00:16:35.234 | 70.00th=[ 634], 80.00th=[ 726], 90.00th=[ 844], 95.00th=[ 911], 00:16:35.234 | 99.00th=[ 961], 99.50th=[ 978], 99.90th=[ 1020], 99.95th=[ 1020], 00:16:35.234 | 99.99th=[ 1020] 00:16:35.234 bw ( KiB/s): min=12288, max=42496, per=4.14%, avg=29184.00, stdev=9196.52, samples=20 00:16:35.234 iops : min= 48, max= 166, avg=114.00, stdev=35.92, samples=20 00:16:35.234 lat (msec) : 50=0.67%, 100=1.58%, 250=4.41%, 500=51.37%, 750=23.28% 00:16:35.234 lat (msec) : 1000=18.54%, 2000=0.17% 00:16:35.234 cpu : usr=0.03%, sys=0.51%, ctx=218, majf=0, minf=4097 00:16:35.234 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.8% 00:16:35.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.234 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.234 issued rwts: total=1203,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.234 job7: (groupid=0, jobs=1): err= 0: pid=86202: Sat Dec 7 22:47:48 2024 00:16:35.234 read: IOPS=197, BW=49.4MiB/s (51.8MB/s)(499MiB/10114msec) 00:16:35.234 slat (usec): min=21, max=140538, avg=5009.01, stdev=12446.23 00:16:35.234 clat (msec): min=75, max=441, avg=318.74, stdev=45.62 00:16:35.234 lat (msec): min=75, max=442, avg=323.75, stdev=45.94 00:16:35.234 clat percentiles (msec): 00:16:35.234 | 1.00th=[ 128], 5.00th=[ 247], 10.00th=[ 271], 20.00th=[ 292], 00:16:35.234 | 30.00th=[ 305], 40.00th=[ 313], 50.00th=[ 321], 60.00th=[ 330], 00:16:35.234 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 372], 95.00th=[ 384], 00:16:35.234 | 99.00th=[ 414], 99.50th=[ 426], 99.90th=[ 443], 99.95th=[ 443], 00:16:35.234 | 99.99th=[ 443] 00:16:35.234 bw ( KiB/s): min=39936, max=53248, per=7.02%, avg=49474.85, stdev=3290.34, samples=20 00:16:35.234 iops : min= 156, max= 208, avg=193.20, stdev=12.86, samples=20 00:16:35.234 lat (msec) : 100=0.25%, 250=5.81%, 500=93.94% 00:16:35.234 cpu : usr=0.11%, sys=1.10%, ctx=395, majf=0, minf=4097 00:16:35.234 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 
16=0.8%, 32=1.6%, >=64=96.8% 00:16:35.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.234 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.234 issued rwts: total=1997,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.234 job8: (groupid=0, jobs=1): err= 0: pid=86203: Sat Dec 7 22:47:48 2024 00:16:35.234 read: IOPS=139, BW=34.8MiB/s (36.5MB/s)(353MiB/10153msec) 00:16:35.234 slat (usec): min=20, max=204845, avg=7086.46, stdev=18698.11 00:16:35.234 clat (msec): min=82, max=777, avg=452.18, stdev=137.20 00:16:35.234 lat (msec): min=82, max=777, avg=459.27, stdev=138.57 00:16:35.234 clat percentiles (msec): 00:16:35.234 | 1.00th=[ 203], 5.00th=[ 271], 10.00th=[ 288], 20.00th=[ 326], 00:16:35.234 | 30.00th=[ 372], 40.00th=[ 409], 50.00th=[ 435], 60.00th=[ 456], 00:16:35.234 | 70.00th=[ 518], 80.00th=[ 600], 90.00th=[ 659], 95.00th=[ 693], 00:16:35.234 | 99.00th=[ 743], 99.50th=[ 760], 99.90th=[ 776], 99.95th=[ 776], 00:16:35.234 | 99.99th=[ 776] 00:16:35.234 bw ( KiB/s): min=20992, max=45568, per=4.90%, avg=34534.40, stdev=8500.26, samples=20 00:16:35.234 iops : min= 82, max= 178, avg=134.90, stdev=33.20, samples=20 00:16:35.234 lat (msec) : 100=0.28%, 250=2.12%, 500=65.96%, 750=30.79%, 1000=0.85% 00:16:35.234 cpu : usr=0.08%, sys=0.65%, ctx=277, majf=0, minf=4097 00:16:35.234 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:16:35.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.234 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.234 issued rwts: total=1413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.234 job9: (groupid=0, jobs=1): err= 0: pid=86205: Sat Dec 7 22:47:48 2024 00:16:35.234 read: IOPS=119, BW=30.0MiB/s (31.4MB/s)(303MiB/10122msec) 00:16:35.234 slat (usec): min=13, max=306566, avg=8199.31, stdev=26996.63 00:16:35.234 clat (msec): min=15, max=1023, avg=524.76, stdev=213.81 00:16:35.234 lat (msec): min=15, max=1023, avg=532.96, stdev=216.32 00:16:35.234 clat percentiles (msec): 00:16:35.234 | 1.00th=[ 22], 5.00th=[ 133], 10.00th=[ 372], 20.00th=[ 405], 00:16:35.234 | 30.00th=[ 418], 40.00th=[ 435], 50.00th=[ 456], 60.00th=[ 481], 00:16:35.234 | 70.00th=[ 592], 80.00th=[ 735], 90.00th=[ 844], 95.00th=[ 927], 00:16:35.234 | 99.00th=[ 1003], 99.50th=[ 1003], 99.90th=[ 1020], 99.95th=[ 1020], 00:16:35.234 | 99.99th=[ 1020] 00:16:35.234 bw ( KiB/s): min= 8192, max=41900, per=4.17%, avg=29410.20, stdev=10548.56, samples=20 00:16:35.234 iops : min= 32, max= 163, avg=114.85, stdev=41.16, samples=20 00:16:35.234 lat (msec) : 20=0.91%, 50=1.57%, 100=2.31%, 250=1.32%, 500=56.88% 00:16:35.234 lat (msec) : 750=17.56%, 1000=18.30%, 2000=1.15% 00:16:35.234 cpu : usr=0.05%, sys=0.62%, ctx=271, majf=0, minf=4097 00:16:35.234 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.6%, >=64=94.8% 00:16:35.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.234 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.234 issued rwts: total=1213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.234 job10: (groupid=0, jobs=1): err= 0: pid=86208: Sat Dec 7 22:47:48 2024 00:16:35.234 read: IOPS=98, BW=24.5MiB/s (25.7MB/s)(249MiB/10151msec) 00:16:35.234 slat (usec): min=17, max=376026, 
avg=9516.33, stdev=28462.34 00:16:35.234 clat (msec): min=144, max=1011, avg=641.86, stdev=123.55 00:16:35.234 lat (msec): min=217, max=1011, avg=651.38, stdev=124.75 00:16:35.234 clat percentiles (msec): 00:16:35.234 | 1.00th=[ 292], 5.00th=[ 426], 10.00th=[ 527], 20.00th=[ 584], 00:16:35.234 | 30.00th=[ 600], 40.00th=[ 617], 50.00th=[ 634], 60.00th=[ 651], 00:16:35.234 | 70.00th=[ 684], 80.00th=[ 718], 90.00th=[ 785], 95.00th=[ 885], 00:16:35.234 | 99.00th=[ 978], 99.50th=[ 978], 99.90th=[ 1011], 99.95th=[ 1011], 00:16:35.234 | 99.99th=[ 1011] 00:16:35.234 bw ( KiB/s): min= 2052, max=31744, per=3.39%, avg=23885.00, stdev=7070.99, samples=20 00:16:35.234 iops : min= 8, max= 124, avg=93.30, stdev=27.62, samples=20 00:16:35.234 lat (msec) : 250=0.70%, 500=8.13%, 750=78.11%, 1000=12.95%, 2000=0.10% 00:16:35.234 cpu : usr=0.04%, sys=0.48%, ctx=226, majf=0, minf=4097 00:16:35.234 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:16:35.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.234 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:35.234 issued rwts: total=996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:35.234 00:16:35.234 Run status group 0 (all jobs): 00:16:35.235 READ: bw=688MiB/s (722MB/s), 24.5MiB/s-321MiB/s (25.7MB/s-337MB/s), io=6992MiB (7331MB), run=10024-10160msec 00:16:35.235 00:16:35.235 Disk stats (read/write): 00:16:35.235 nvme0n1: ios=25620/0, merge=0/0, ticks=1241104/0, in_queue=1241104, util=97.66% 00:16:35.235 nvme10n1: ios=3875/0, merge=0/0, ticks=1223468/0, in_queue=1223468, util=97.89% 00:16:35.235 nvme1n1: ios=2963/0, merge=0/0, ticks=1208770/0, in_queue=1208770, util=98.02% 00:16:35.235 nvme2n1: ios=2874/0, merge=0/0, ticks=1209948/0, in_queue=1209948, util=98.21% 00:16:35.235 nvme3n1: ios=2333/0, merge=0/0, ticks=1223053/0, in_queue=1223053, util=98.29% 00:16:35.235 nvme4n1: ios=3880/0, merge=0/0, ticks=1227285/0, in_queue=1227285, util=98.43% 00:16:35.235 nvme5n1: ios=2289/0, merge=0/0, ticks=1222223/0, in_queue=1222223, util=98.51% 00:16:35.235 nvme6n1: ios=3862/0, merge=0/0, ticks=1225538/0, in_queue=1225538, util=98.51% 00:16:35.235 nvme7n1: ios=2699/0, merge=0/0, ticks=1206257/0, in_queue=1206257, util=98.82% 00:16:35.235 nvme8n1: ios=2299/0, merge=0/0, ticks=1217618/0, in_queue=1217618, util=99.01% 00:16:35.235 nvme9n1: ios=1865/0, merge=0/0, ticks=1202839/0, in_queue=1202839, util=99.04% 00:16:35.235 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:16:35.235 [global] 00:16:35.235 thread=1 00:16:35.235 invalidate=1 00:16:35.235 rw=randwrite 00:16:35.235 time_based=1 00:16:35.235 runtime=10 00:16:35.235 ioengine=libaio 00:16:35.235 direct=1 00:16:35.235 bs=262144 00:16:35.235 iodepth=64 00:16:35.235 norandommap=1 00:16:35.235 numjobs=1 00:16:35.235 00:16:35.235 [job0] 00:16:35.235 filename=/dev/nvme0n1 00:16:35.235 [job1] 00:16:35.235 filename=/dev/nvme10n1 00:16:35.235 [job2] 00:16:35.235 filename=/dev/nvme1n1 00:16:35.235 [job3] 00:16:35.235 filename=/dev/nvme2n1 00:16:35.235 [job4] 00:16:35.235 filename=/dev/nvme3n1 00:16:35.235 [job5] 00:16:35.235 filename=/dev/nvme4n1 00:16:35.235 [job6] 00:16:35.235 filename=/dev/nvme5n1 00:16:35.235 [job7] 00:16:35.235 filename=/dev/nvme6n1 00:16:35.235 [job8] 00:16:35.235 filename=/dev/nvme7n1 00:16:35.235 [job9] 
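
Annotation: the fio-wrapper invocation above drives fio with a generated job file whose contents are echoed in the surrounding log lines (the [jobN] listing continues below). A minimal reconstruction of that job file is sketched here, assuming the wrapper simply concatenates the printed [global] section with one [jobN]/filename stanza per namespace; the file name multiconnection.fio is hypothetical, and all option values are copied from the listing in this log. Only the first two of the eleven stanzas are shown.

    ; multiconnection.fio -- reconstructed sketch, not the wrapper's actual output
    [global]
    thread=1
    invalidate=1
    rw=randwrite
    time_based=1
    runtime=10
    ioengine=libaio
    direct=1
    bs=262144
    iodepth=64
    norandommap=1
    numjobs=1

    [job0]
    filename=/dev/nvme0n1

    [job1]
    filename=/dev/nvme10n1

Run as: fio multiconnection.fio
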
00:16:35.235 filename=/dev/nvme8n1 00:16:35.235 [job10] 00:16:35.235 filename=/dev/nvme9n1 00:16:35.235 Could not set queue depth (nvme0n1) 00:16:35.235 Could not set queue depth (nvme10n1) 00:16:35.235 Could not set queue depth (nvme1n1) 00:16:35.235 Could not set queue depth (nvme2n1) 00:16:35.235 Could not set queue depth (nvme3n1) 00:16:35.235 Could not set queue depth (nvme4n1) 00:16:35.235 Could not set queue depth (nvme5n1) 00:16:35.235 Could not set queue depth (nvme6n1) 00:16:35.235 Could not set queue depth (nvme7n1) 00:16:35.235 Could not set queue depth (nvme8n1) 00:16:35.235 Could not set queue depth (nvme9n1) 00:16:35.235 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.235 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.235 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.235 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.235 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.235 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.235 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.235 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.235 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.235 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.235 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:35.235 fio-3.35 00:16:35.235 Starting 11 threads 00:16:45.219 00:16:45.219 job0: (groupid=0, jobs=1): err= 0: pid=86405: Sat Dec 7 22:47:58 2024 00:16:45.219 write: IOPS=241, BW=60.3MiB/s (63.3MB/s)(614MiB/10178msec); 0 zone resets 00:16:45.219 slat (usec): min=19, max=135871, avg=4071.28, stdev=7517.03 00:16:45.219 clat (msec): min=137, max=425, avg=261.03, stdev=18.81 00:16:45.219 lat (msec): min=137, max=425, avg=265.11, stdev=17.68 00:16:45.219 clat percentiles (msec): 00:16:45.219 | 1.00th=[ 192], 5.00th=[ 245], 10.00th=[ 247], 20.00th=[ 251], 00:16:45.220 | 30.00th=[ 259], 40.00th=[ 262], 50.00th=[ 264], 60.00th=[ 264], 00:16:45.220 | 70.00th=[ 266], 80.00th=[ 268], 90.00th=[ 271], 95.00th=[ 275], 00:16:45.220 | 99.00th=[ 342], 99.50th=[ 376], 99.90th=[ 409], 99.95th=[ 426], 00:16:45.220 | 99.99th=[ 426] 00:16:45.220 bw ( KiB/s): min=49152, max=63488, per=7.57%, avg=61248.30, stdev=3217.51, samples=20 00:16:45.220 iops : min= 192, max= 248, avg=239.20, stdev=12.55, samples=20 00:16:45.220 lat (msec) : 250=18.65%, 500=81.35% 00:16:45.220 cpu : usr=0.44%, sys=0.79%, ctx=2276, majf=0, minf=1 00:16:45.220 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:16:45.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.220 issued rwts: total=0,2456,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.220 latency : target=0, window=0, percentile=100.00%, 
depth=64 00:16:45.220 job1: (groupid=0, jobs=1): err= 0: pid=86406: Sat Dec 7 22:47:58 2024 00:16:45.220 write: IOPS=180, BW=45.0MiB/s (47.2MB/s)(460MiB/10217msec); 0 zone resets 00:16:45.220 slat (usec): min=16, max=107808, avg=5372.32, stdev=9915.95 00:16:45.220 clat (msec): min=109, max=553, avg=349.85, stdev=36.22 00:16:45.220 lat (msec): min=109, max=554, avg=355.22, stdev=35.66 00:16:45.220 clat percentiles (msec): 00:16:45.220 | 1.00th=[ 171], 5.00th=[ 309], 10.00th=[ 330], 20.00th=[ 338], 00:16:45.220 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 359], 00:16:45.220 | 70.00th=[ 363], 80.00th=[ 363], 90.00th=[ 372], 95.00th=[ 376], 00:16:45.220 | 99.00th=[ 443], 99.50th=[ 510], 99.90th=[ 558], 99.95th=[ 558], 00:16:45.220 | 99.99th=[ 558] 00:16:45.220 bw ( KiB/s): min=40878, max=47104, per=5.62%, avg=45461.50, stdev=1584.82, samples=20 00:16:45.220 iops : min= 159, max= 184, avg=177.55, stdev= 6.30, samples=20 00:16:45.220 lat (msec) : 250=2.50%, 500=96.96%, 750=0.54% 00:16:45.220 cpu : usr=0.35%, sys=0.60%, ctx=1857, majf=0, minf=1 00:16:45.220 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:16:45.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.220 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.220 issued rwts: total=0,1840,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.220 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.220 job2: (groupid=0, jobs=1): err= 0: pid=86418: Sat Dec 7 22:47:58 2024 00:16:45.220 write: IOPS=244, BW=61.0MiB/s (64.0MB/s)(621MiB/10173msec); 0 zone resets 00:16:45.220 slat (usec): min=18, max=28323, avg=4020.73, stdev=7029.96 00:16:45.220 clat (msec): min=28, max=433, avg=257.97, stdev=28.54 00:16:45.220 lat (msec): min=28, max=433, avg=261.99, stdev=28.19 00:16:45.220 clat percentiles (msec): 00:16:45.220 | 1.00th=[ 106], 5.00th=[ 241], 10.00th=[ 245], 20.00th=[ 249], 00:16:45.220 | 30.00th=[ 257], 40.00th=[ 262], 50.00th=[ 264], 60.00th=[ 264], 00:16:45.220 | 70.00th=[ 266], 80.00th=[ 268], 90.00th=[ 271], 95.00th=[ 271], 00:16:45.220 | 99.00th=[ 334], 99.50th=[ 384], 99.90th=[ 418], 99.95th=[ 435], 00:16:45.220 | 99.99th=[ 435] 00:16:45.220 bw ( KiB/s): min=59392, max=67449, per=7.66%, avg=61970.85, stdev=1927.37, samples=20 00:16:45.220 iops : min= 232, max= 263, avg=242.05, stdev= 7.46, samples=20 00:16:45.220 lat (msec) : 50=0.32%, 100=0.64%, 250=20.49%, 500=78.54% 00:16:45.220 cpu : usr=0.42%, sys=0.72%, ctx=2677, majf=0, minf=1 00:16:45.220 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:16:45.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.220 issued rwts: total=0,2484,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.220 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.220 job3: (groupid=0, jobs=1): err= 0: pid=86419: Sat Dec 7 22:47:58 2024 00:16:45.220 write: IOPS=655, BW=164MiB/s (172MB/s)(1653MiB/10078msec); 0 zone resets 00:16:45.220 slat (usec): min=16, max=11478, avg=1507.08, stdev=2564.63 00:16:45.220 clat (msec): min=13, max=173, avg=96.04, stdev= 6.88 00:16:45.220 lat (msec): min=13, max=173, avg=97.55, stdev= 6.47 00:16:45.220 clat percentiles (msec): 00:16:45.220 | 1.00th=[ 90], 5.00th=[ 91], 10.00th=[ 91], 20.00th=[ 92], 00:16:45.220 | 30.00th=[ 96], 40.00th=[ 96], 50.00th=[ 97], 60.00th=[ 97], 00:16:45.220 | 70.00th=[ 97], 80.00th=[ 99], 90.00th=[ 
99], 95.00th=[ 100], 00:16:45.220 | 99.00th=[ 124], 99.50th=[ 136], 99.90th=[ 163], 99.95th=[ 169], 00:16:45.220 | 99.99th=[ 174] 00:16:45.220 bw ( KiB/s): min=151552, max=169984, per=20.71%, avg=167586.35, stdev=4164.13, samples=20 00:16:45.220 iops : min= 592, max= 664, avg=654.60, stdev=16.26, samples=20 00:16:45.220 lat (msec) : 20=0.06%, 50=0.24%, 100=96.40%, 250=3.30% 00:16:45.220 cpu : usr=1.10%, sys=1.69%, ctx=7777, majf=0, minf=1 00:16:45.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:16:45.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.220 issued rwts: total=0,6610,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.220 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.220 job4: (groupid=0, jobs=1): err= 0: pid=86420: Sat Dec 7 22:47:58 2024 00:16:45.220 write: IOPS=176, BW=44.2MiB/s (46.4MB/s)(452MiB/10221msec); 0 zone resets 00:16:45.220 slat (usec): min=17, max=115807, avg=5533.32, stdev=10262.97 00:16:45.220 clat (msec): min=40, max=563, avg=356.10, stdev=48.83 00:16:45.220 lat (msec): min=40, max=563, avg=361.64, stdev=48.67 00:16:45.220 clat percentiles (msec): 00:16:45.220 | 1.00th=[ 100], 5.00th=[ 321], 10.00th=[ 338], 20.00th=[ 342], 00:16:45.220 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 359], 60.00th=[ 363], 00:16:45.220 | 70.00th=[ 368], 80.00th=[ 380], 90.00th=[ 388], 95.00th=[ 397], 00:16:45.220 | 99.00th=[ 451], 99.50th=[ 518], 99.90th=[ 567], 99.95th=[ 567], 00:16:45.220 | 99.99th=[ 567] 00:16:45.220 bw ( KiB/s): min=40960, max=47104, per=5.52%, avg=44667.25, stdev=1575.99, samples=20 00:16:45.220 iops : min= 160, max= 184, avg=174.45, stdev= 6.11, samples=20 00:16:45.220 lat (msec) : 50=0.22%, 100=0.88%, 250=2.54%, 500=95.80%, 750=0.55% 00:16:45.220 cpu : usr=0.30%, sys=0.56%, ctx=639, majf=0, minf=1 00:16:45.220 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:16:45.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.220 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.220 issued rwts: total=0,1808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.220 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.220 job5: (groupid=0, jobs=1): err= 0: pid=86421: Sat Dec 7 22:47:58 2024 00:16:45.220 write: IOPS=663, BW=166MiB/s (174MB/s)(1674MiB/10083msec); 0 zone resets 00:16:45.220 slat (usec): min=17, max=12120, avg=1480.79, stdev=2530.78 00:16:45.220 clat (msec): min=14, max=178, avg=94.89, stdev= 9.26 00:16:45.220 lat (msec): min=14, max=178, avg=96.37, stdev= 9.08 00:16:45.220 clat percentiles (msec): 00:16:45.220 | 1.00th=[ 54], 5.00th=[ 90], 10.00th=[ 91], 20.00th=[ 92], 00:16:45.220 | 30.00th=[ 95], 40.00th=[ 96], 50.00th=[ 97], 60.00th=[ 97], 00:16:45.220 | 70.00th=[ 97], 80.00th=[ 99], 90.00th=[ 99], 95.00th=[ 100], 00:16:45.220 | 99.00th=[ 108], 99.50th=[ 131], 99.90th=[ 167], 99.95th=[ 174], 00:16:45.220 | 99.99th=[ 180] 00:16:45.220 bw ( KiB/s): min=162816, max=195193, per=20.97%, avg=169700.10, stdev=6307.19, samples=20 00:16:45.220 iops : min= 636, max= 762, avg=662.80, stdev=24.54, samples=20 00:16:45.220 lat (msec) : 20=0.16%, 50=0.70%, 100=96.98%, 250=2.15% 00:16:45.220 cpu : usr=1.02%, sys=1.90%, ctx=8086, majf=0, minf=1 00:16:45.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:45.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
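
Annotation: fio's headline bandwidth is just IOPS times block size, which gives a quick sanity check on these lines. Using job5's figures from this run (IOPS=663 at the job file's bs=262144):

    # BW = IOPS * bs; integer shell arithmetic, so expect rounding.
    echo $((663 * 262144 / 1000000))   # 173 -> fio prints "174MB/s"
    echo $((663 * 262144 / 1048576))   # 165 -> fio prints "166MiB/s"

The off-by-one comes from rounding: fio derives bandwidth from total bytes over the whole run and rounds each printed figure independently, rather than multiplying the rounded IOPS value.
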
00:16:45.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.220 issued rwts: total=0,6694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.220 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.220 job6: (groupid=0, jobs=1): err= 0: pid=86422: Sat Dec 7 22:47:58 2024 00:16:45.220 write: IOPS=247, BW=61.8MiB/s (64.8MB/s)(629MiB/10178msec); 0 zone resets 00:16:45.220 slat (usec): min=19, max=35972, avg=3919.03, stdev=6970.65 00:16:45.220 clat (msec): min=7, max=439, avg=254.75, stdev=37.82 00:16:45.220 lat (msec): min=7, max=439, avg=258.67, stdev=37.90 00:16:45.220 clat percentiles (msec): 00:16:45.220 | 1.00th=[ 45], 5.00th=[ 222], 10.00th=[ 245], 20.00th=[ 249], 00:16:45.220 | 30.00th=[ 255], 40.00th=[ 262], 50.00th=[ 264], 60.00th=[ 264], 00:16:45.220 | 70.00th=[ 266], 80.00th=[ 268], 90.00th=[ 271], 95.00th=[ 271], 00:16:45.220 | 99.00th=[ 342], 99.50th=[ 393], 99.90th=[ 426], 99.95th=[ 439], 00:16:45.220 | 99.99th=[ 439] 00:16:45.220 bw ( KiB/s): min=59273, max=80384, per=7.76%, avg=62816.45, stdev=4412.79, samples=20 00:16:45.220 iops : min= 231, max= 314, avg=245.35, stdev=17.26, samples=20 00:16:45.220 lat (msec) : 10=0.08%, 20=0.48%, 50=0.56%, 100=0.87%, 250=20.62% 00:16:45.220 lat (msec) : 500=77.39% 00:16:45.220 cpu : usr=0.47%, sys=0.71%, ctx=2596, majf=0, minf=1 00:16:45.220 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:16:45.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.220 issued rwts: total=0,2517,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.220 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.220 job7: (groupid=0, jobs=1): err= 0: pid=86423: Sat Dec 7 22:47:58 2024 00:16:45.220 write: IOPS=176, BW=44.2MiB/s (46.4MB/s)(452MiB/10220msec); 0 zone resets 00:16:45.220 slat (usec): min=17, max=193631, avg=5529.80, stdev=10680.92 00:16:45.220 clat (msec): min=199, max=569, avg=356.05, stdev=29.25 00:16:45.220 lat (msec): min=199, max=569, avg=361.58, stdev=27.92 00:16:45.220 clat percentiles (msec): 00:16:45.220 | 1.00th=[ 247], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 342], 00:16:45.220 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 359], 00:16:45.220 | 70.00th=[ 363], 80.00th=[ 368], 90.00th=[ 376], 95.00th=[ 384], 00:16:45.220 | 99.00th=[ 460], 99.50th=[ 527], 99.90th=[ 567], 99.95th=[ 567], 00:16:45.220 | 99.99th=[ 567] 00:16:45.220 bw ( KiB/s): min=34885, max=47104, per=5.52%, avg=44675.45, stdev=2859.34, samples=20 00:16:45.220 iops : min= 136, max= 184, avg=174.50, stdev=11.22, samples=20 00:16:45.220 lat (msec) : 250=1.16%, 500=98.06%, 750=0.77% 00:16:45.220 cpu : usr=0.37%, sys=0.46%, ctx=2082, majf=0, minf=1 00:16:45.220 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:16:45.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.221 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.221 issued rwts: total=0,1808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.221 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.221 job8: (groupid=0, jobs=1): err= 0: pid=86424: Sat Dec 7 22:47:58 2024 00:16:45.221 write: IOPS=174, BW=43.6MiB/s (45.7MB/s)(445MiB/10213msec); 0 zone resets 00:16:45.221 slat (usec): min=17, max=260195, avg=5617.52, stdev=11619.65 00:16:45.221 clat (msec): min=201, max=561, avg=361.43, stdev=30.82 00:16:45.221 lat (msec): min=224, 
max=561, avg=367.04, stdev=29.32 00:16:45.221 clat percentiles (msec): 00:16:45.221 | 1.00th=[ 271], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 342], 00:16:45.221 | 30.00th=[ 355], 40.00th=[ 359], 50.00th=[ 359], 60.00th=[ 363], 00:16:45.221 | 70.00th=[ 363], 80.00th=[ 372], 90.00th=[ 393], 95.00th=[ 401], 00:16:45.221 | 99.00th=[ 493], 99.50th=[ 518], 99.90th=[ 558], 99.95th=[ 558], 00:16:45.221 | 99.99th=[ 558] 00:16:45.221 bw ( KiB/s): min=28672, max=47104, per=5.43%, avg=43955.20, stdev=4183.57, samples=20 00:16:45.221 iops : min= 112, max= 184, avg=171.70, stdev=16.34, samples=20 00:16:45.221 lat (msec) : 250=0.51%, 500=98.54%, 750=0.96% 00:16:45.221 cpu : usr=0.33%, sys=0.54%, ctx=1808, majf=0, minf=1 00:16:45.221 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:16:45.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.221 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.221 issued rwts: total=0,1780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.221 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.221 job9: (groupid=0, jobs=1): err= 0: pid=86425: Sat Dec 7 22:47:58 2024 00:16:45.221 write: IOPS=180, BW=45.0MiB/s (47.2MB/s)(460MiB/10218msec); 0 zone resets 00:16:45.221 slat (usec): min=17, max=71457, avg=5433.80, stdev=9756.49 00:16:45.221 clat (msec): min=38, max=567, avg=349.82, stdev=48.78 00:16:45.221 lat (msec): min=38, max=567, avg=355.25, stdev=48.72 00:16:45.221 clat percentiles (msec): 00:16:45.221 | 1.00th=[ 99], 5.00th=[ 296], 10.00th=[ 321], 20.00th=[ 338], 00:16:45.221 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 363], 00:16:45.221 | 70.00th=[ 368], 80.00th=[ 372], 90.00th=[ 384], 95.00th=[ 388], 00:16:45.221 | 99.00th=[ 456], 99.50th=[ 523], 99.90th=[ 567], 99.95th=[ 567], 00:16:45.221 | 99.99th=[ 567] 00:16:45.221 bw ( KiB/s): min=43008, max=51200, per=5.62%, avg=45496.10, stdev=2269.52, samples=20 00:16:45.221 iops : min= 168, max= 200, avg=177.70, stdev= 8.83, samples=20 00:16:45.221 lat (msec) : 50=0.22%, 100=0.87%, 250=2.45%, 500=95.71%, 750=0.76% 00:16:45.221 cpu : usr=0.30%, sys=0.56%, ctx=2083, majf=0, minf=1 00:16:45.221 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:16:45.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.221 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.221 issued rwts: total=0,1840,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.221 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.221 job10: (groupid=0, jobs=1): err= 0: pid=86426: Sat Dec 7 22:47:58 2024 00:16:45.221 write: IOPS=243, BW=60.8MiB/s (63.8MB/s)(619MiB/10178msec); 0 zone resets 00:16:45.221 slat (usec): min=21, max=63986, avg=4036.03, stdev=7118.83 00:16:45.221 clat (msec): min=22, max=431, avg=258.93, stdev=29.08 00:16:45.221 lat (msec): min=22, max=431, avg=262.97, stdev=28.74 00:16:45.221 clat percentiles (msec): 00:16:45.221 | 1.00th=[ 97], 5.00th=[ 245], 10.00th=[ 247], 20.00th=[ 251], 00:16:45.221 | 30.00th=[ 259], 40.00th=[ 262], 50.00th=[ 264], 60.00th=[ 266], 00:16:45.221 | 70.00th=[ 266], 80.00th=[ 268], 90.00th=[ 271], 95.00th=[ 275], 00:16:45.221 | 99.00th=[ 334], 99.50th=[ 384], 99.90th=[ 418], 99.95th=[ 430], 00:16:45.221 | 99.99th=[ 430] 00:16:45.221 bw ( KiB/s): min=59392, max=64000, per=7.63%, avg=61772.80, stdev=1212.47, samples=20 00:16:45.221 iops : min= 232, max= 250, avg=241.30, stdev= 4.74, samples=20 00:16:45.221 lat (msec) : 
50=0.44%, 100=0.65%, 250=16.84%, 500=82.07% 00:16:45.221 cpu : usr=0.47%, sys=0.79%, ctx=2153, majf=0, minf=1 00:16:45.221 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:16:45.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:45.221 issued rwts: total=0,2476,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.221 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:45.221 00:16:45.221 Run status group 0 (all jobs): 00:16:45.221 WRITE: bw=790MiB/s (829MB/s), 43.6MiB/s-166MiB/s (45.7MB/s-174MB/s), io=8078MiB (8471MB), run=10078-10221msec 00:16:45.221 00:16:45.221 Disk stats (read/write): 00:16:45.221 nvme0n1: ios=50/4775, merge=0/0, ticks=33/1207017, in_queue=1207050, util=97.77% 00:16:45.221 nvme10n1: ios=49/3545, merge=0/0, ticks=86/1199955, in_queue=1200041, util=98.17% 00:16:45.221 nvme1n1: ios=40/4836, merge=0/0, ticks=49/1205601, in_queue=1205650, util=98.04% 00:16:45.221 nvme2n1: ios=29/13063, merge=0/0, ticks=38/1214019, in_queue=1214057, util=98.02% 00:16:45.221 nvme3n1: ios=22/3484, merge=0/0, ticks=108/1199972, in_queue=1200080, util=98.28% 00:16:45.221 nvme4n1: ios=0/13242, merge=0/0, ticks=0/1215478, in_queue=1215478, util=98.26% 00:16:45.221 nvme5n1: ios=0/4908, merge=0/0, ticks=0/1208176, in_queue=1208176, util=98.42% 00:16:45.221 nvme6n1: ios=0/3486, merge=0/0, ticks=0/1200942, in_queue=1200942, util=98.41% 00:16:45.221 nvme7n1: ios=0/3428, merge=0/0, ticks=0/1200447, in_queue=1200447, util=98.65% 00:16:45.221 nvme8n1: ios=0/3550, merge=0/0, ticks=0/1200577, in_queue=1200577, util=98.84% 00:16:45.221 nvme9n1: ios=0/4819, merge=0/0, ticks=0/1206186, in_queue=1206186, util=98.93% 00:16:45.221 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:16:45.221 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:16:45.221 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.221 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:45.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.221 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:16:45.221 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.221 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.221 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.221 
22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:45.221 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:45.221 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.221 
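
Annotation: the trace repeats the same teardown for each of the eleven subsystems: detach the initiator-side controller, wait for its block device to disappear, then delete the subsystem on the target. Condensed into a standalone loop using the commands visible above (rpc_cmd is the framework's wrapper; the direct scripts/rpc.py call below is an assumption about what it forwards to):

    # Tear down all eleven test subsystems, mirroring multiconnection.sh.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in $(seq 1 11); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"           # initiator side
        waitforserial_disconnect "SPDK${i}"                          # block device gone?
        "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}" # target side
    done
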
22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:45.221 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.221 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:45.222 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.222 
22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:16:45.222 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:16:45.222 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.222 
22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:16:45.222 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:16:45.222 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.222 
22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:16:45.222 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:16:45.222 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:16:45.222 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 
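
Annotation: waitforserial_disconnect, called after every nvme disconnect above, evidently polls lsblk until the given serial number no longer appears. A hypothetical reconstruction from the lsblk/grep calls visible in the trace; the retry bound and sleep interval are assumptions, not the framework's actual values:

    # Poll until no block device reports the given serial (e.g. SPDK10).
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
            ((++i < 15)) || return 1   # assumed ~15s give-up bound
            sleep 1
        done
        # Final confirmation with the flat listing, as in the trace.
        lsblk -l -o NAME,SERIAL | grep -q -w "$serial" && return 1
        return 0
    }

    waitforserial_disconnect SPDK10
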
00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:45.223 rmmod nvme_tcp 00:16:45.223 rmmod nvme_fabrics 00:16:45.223 rmmod nvme_keyring 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 85740 ']' 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 85740 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 85740 ']' 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 85740 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85740 00:16:45.223 killing process with pid 85740 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85740' 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 85740 00:16:45.223 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 85740 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:45.793 22:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:16:45.793 00:16:45.793 real 0m49.217s 00:16:45.793 user 2m49.256s 00:16:45.793 sys 0m25.403s 00:16:45.793 ************************************ 00:16:45.793 END TEST nvmf_multiconnection 00:16:45.793 ************************************ 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:45.793 22:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:45.793 22:48:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:46.054 ************************************ 00:16:46.054 START TEST nvmf_initiator_timeout 00:16:46.054 ************************************ 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:46.054 * Looking for test storage... 00:16:46.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:46.054 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:46.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.055 --rc genhtml_branch_coverage=1 00:16:46.055 --rc genhtml_function_coverage=1 00:16:46.055 --rc genhtml_legend=1 00:16:46.055 --rc geninfo_all_blocks=1 00:16:46.055 --rc geninfo_unexecuted_blocks=1 00:16:46.055 00:16:46.055 ' 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:46.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.055 --rc genhtml_branch_coverage=1 00:16:46.055 --rc genhtml_function_coverage=1 00:16:46.055 --rc genhtml_legend=1 00:16:46.055 --rc geninfo_all_blocks=1 00:16:46.055 --rc geninfo_unexecuted_blocks=1 00:16:46.055 00:16:46.055 ' 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:46.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.055 --rc genhtml_branch_coverage=1 00:16:46.055 --rc genhtml_function_coverage=1 00:16:46.055 --rc genhtml_legend=1 00:16:46.055 --rc geninfo_all_blocks=1 00:16:46.055 --rc geninfo_unexecuted_blocks=1 00:16:46.055 00:16:46.055 ' 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:46.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:46.055 --rc genhtml_branch_coverage=1 00:16:46.055 --rc genhtml_function_coverage=1 00:16:46.055 --rc genhtml_legend=1 00:16:46.055 --rc geninfo_all_blocks=1 00:16:46.055 --rc geninfo_unexecuted_blocks=1 00:16:46.055 00:16:46.055 ' 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.055 22:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:46.055 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
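The "[: : integer expression expected" message from nvmf/common.sh line 33 above is the harness tripping over a numeric test on an empty variable ('[' '' -eq 1 ']'); the branch simply falls through, so the run is unaffected. For reference, a minimal sketch of the usual fix, with SOME_TEST_FLAG standing in for the flag whose name is not visible in this trace:

# Hypothetical guard; SOME_TEST_FLAG and the branch body are placeholders only.
if [[ "${SOME_TEST_FLAG:-0}" -eq 1 ]]; then
    :  # branch body not recorded in this log
fi

Defaulting the expansion with ${SOME_TEST_FLAG:-0} (or using bash's [[ ]], which arithmetically evaluates an empty operand as 0) avoids the "integer expression expected" error that single-bracket [ raises on an empty string.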
00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:46.055 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:46.056 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.056 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:46.056 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:46.056 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:46.056 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:46.056 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:46.056 Cannot find device "nvmf_init_br" 00:16:46.056 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:16:46.056 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:46.056 Cannot find device "nvmf_init_br2" 00:16:46.056 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:16:46.056 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:46.056 Cannot find device "nvmf_tgt_br" 00:16:46.056 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:16:46.056 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:46.315 Cannot find device "nvmf_tgt_br2" 00:16:46.315 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:16:46.315 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:46.315 Cannot find device "nvmf_init_br" 00:16:46.315 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:16:46.315 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:46.315 Cannot find device "nvmf_init_br2" 00:16:46.315 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:16:46.315 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:46.315 Cannot find device "nvmf_tgt_br" 00:16:46.315 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:16:46.315 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:46.315 Cannot find device "nvmf_tgt_br2" 00:16:46.315 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:16:46.315 22:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:46.315 Cannot find device "nvmf_br" 00:16:46.315 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:16:46.316 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:46.316 Cannot find device "nvmf_init_if" 00:16:46.316 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:16:46.316 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:46.316 Cannot find device "nvmf_init_if2" 00:16:46.316 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:16:46.316 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:46.316 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.316 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:16:46.316 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:46.316 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:46.316 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:16:46.316 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:46.316 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:46.316 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:46.316 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:46.316 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:46.316 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:46.316 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:46.316 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:46.316 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:46.316 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:46.316 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:46.316 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:46.316 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:46.316 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:16:46.316 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:46.316 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:46.316 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:46.316 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:46.316 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:46.316 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:46.316 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:46.316 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:46.576 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:46.576 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:16:46.576 00:16:46.576 --- 10.0.0.3 ping statistics --- 00:16:46.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.576 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:46.576 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:46.576 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:16:46.576 00:16:46.576 --- 10.0.0.4 ping statistics --- 00:16:46.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.576 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:46.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:46.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:16:46.576 00:16:46.576 --- 10.0.0.1 ping statistics --- 00:16:46.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.576 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:46.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:46.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:16:46.576 00:16:46.576 --- 10.0.0.2 ping statistics --- 00:16:46.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:46.576 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # return 0 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=86840 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 86840 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 86840 ']' 00:16:46.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
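At this point nvmf_veth_init has finished building the virtual topology the rest of the suite runs on, and the four pings above confirm it: the initiator interfaces (10.0.0.1, 10.0.0.2) stay in the root namespace, the target interfaces (10.0.0.3, 10.0.0.4) live inside the nvmf_tgt_ns_spdk namespace, and every veth peer is enslaved to the nvmf_br bridge. A condensed sketch of one interface pair, reconstructed from the trace (the second pair is built the same way with the *2 names and the .2/.4 addresses):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, root namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br                     # bridge the two halves together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic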
00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:46.576 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:46.576 [2024-12-07 22:48:01.257352] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:16:46.577 [2024-12-07 22:48:01.257641] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:46.836 [2024-12-07 22:48:01.391275] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:46.836 [2024-12-07 22:48:01.430179] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:46.836 [2024-12-07 22:48:01.430774] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:46.836 [2024-12-07 22:48:01.430964] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:46.836 [2024-12-07 22:48:01.431119] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:46.836 [2024-12-07 22:48:01.431224] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
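With networking verified, nvmf_tgt is launched inside the namespace and the test provisions it over the RPC socket; the rpc_cmd calls traced below are a thin wrapper around scripts/rpc.py in the SPDK tree. Gathered in one place, the provisioning sequence that follows amounts to:

scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MiB RAM-backed bdev, 512 B blocks
scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # 30 us delays
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 \
    --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3

Once fio's 60-second write job is running against the resulting namespace, the test raises Delay0's latencies with bdev_delay_update_latency to 31000000 us (310000000 us for p99 writes), comfortably past the initiator's default 30 s I/O timeout, and later drops them back to 30 us so the job can finish cleanly; err= 0 in the fio summary further down is the expected result.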
00:16:46.836 [2024-12-07 22:48:01.431540] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.836 [2024-12-07 22:48:01.431784] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:46.836 [2024-12-07 22:48:01.432321] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:46.836 [2024-12-07 22:48:01.432335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.836 [2024-12-07 22:48:01.466623] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:46.836 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:46.836 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:16:46.836 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:46.836 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:46.836 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:46.836 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:46.836 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:46.836 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:46.836 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.836 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.095 Malloc0 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.095 Delay0 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.095 [2024-12-07 22:48:01.636084] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:47.095 22:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.095 [2024-12-07 22:48:01.665309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.095 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:47.096 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:16:47.096 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:16:47.096 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.096 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:47.096 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:16:49.629 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:49.629 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:49.629 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.629 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:49.629 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.629 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:16:49.629 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=86901 00:16:49.629 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@37 -- # sleep 3 00:16:49.629 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:16:49.629 [global] 00:16:49.629 thread=1 00:16:49.629 invalidate=1 00:16:49.629 rw=write 00:16:49.629 time_based=1 00:16:49.629 runtime=60 00:16:49.629 ioengine=libaio 00:16:49.629 direct=1 00:16:49.629 bs=4096 00:16:49.629 iodepth=1 00:16:49.629 norandommap=0 00:16:49.629 numjobs=1 00:16:49.629 00:16:49.629 verify_dump=1 00:16:49.629 verify_backlog=512 00:16:49.629 verify_state_save=0 00:16:49.629 do_verify=1 00:16:49.629 verify=crc32c-intel 00:16:49.629 [job0] 00:16:49.629 filename=/dev/nvme0n1 00:16:49.629 Could not set queue depth (nvme0n1) 00:16:49.629 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:49.629 fio-3.35 00:16:49.629 Starting 1 thread 00:16:52.160 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:16:52.160 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.160 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.160 true 00:16:52.160 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.160 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:16:52.160 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.160 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.160 true 00:16:52.160 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.160 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:16:52.160 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.160 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.160 true 00:16:52.160 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.160 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:16:52.160 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.160 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.160 true 00:16:52.160 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.160 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:55.449 true 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:55.449 true 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:55.449 true 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:55.449 true 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:16:55.449 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 86901 00:17:51.739 00:17:51.739 job0: (groupid=0, jobs=1): err= 0: pid=86923: Sat Dec 7 22:49:04 2024 00:17:51.739 read: IOPS=805, BW=3223KiB/s (3301kB/s)(189MiB/60000msec) 00:17:51.739 slat (usec): min=10, max=19991, avg=14.48, stdev=97.34 00:17:51.739 clat (usec): min=154, max=40682k, avg=1044.72, stdev=185014.95 00:17:51.739 lat (usec): min=165, max=40682k, avg=1059.20, stdev=185014.97 00:17:51.739 clat percentiles (usec): 00:17:51.739 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 184], 00:17:51.739 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 206], 00:17:51.739 | 70.00th=[ 215], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 245], 00:17:51.739 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 338], 99.95th=[ 392], 00:17:51.739 | 99.99th=[ 742] 00:17:51.739 write: IOPS=810, BW=3243KiB/s (3320kB/s)(190MiB/60000msec); 0 zone resets 00:17:51.739 slat (usec): min=13, max=583, avg=20.57, stdev= 6.71 00:17:51.739 clat (usec): min=3, max=743, avg=156.70, stdev=23.03 00:17:51.739 lat (usec): min=131, max=824, avg=177.27, stdev=24.56 00:17:51.739 clat percentiles (usec): 00:17:51.739 | 1.00th=[ 121], 5.00th=[ 126], 10.00th=[ 131], 20.00th=[ 139], 00:17:51.739 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 155], 60.00th=[ 159], 00:17:51.739 | 70.00th=[ 165], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 196], 00:17:51.739 | 99.00th=[ 219], 
99.50th=[ 229], 99.90th=[ 277], 99.95th=[ 334], 00:17:51.739 | 99.99th=[ 627] 00:17:51.739 bw ( KiB/s): min= 648, max=12288, per=100.00%, avg=9766.44, stdev=2287.56, samples=39 00:17:51.739 iops : min= 162, max= 3072, avg=2441.59, stdev=571.89, samples=39 00:17:51.739 lat (usec) : 4=0.01%, 100=0.01%, 250=98.05%, 500=1.92%, 750=0.02% 00:17:51.739 lat (usec) : 1000=0.01% 00:17:51.739 lat (msec) : 2=0.01%, >=2000=0.01% 00:17:51.739 cpu : usr=0.63%, sys=2.18%, ctx=97015, majf=0, minf=5 00:17:51.739 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:51.739 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.739 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.739 issued rwts: total=48349,48640,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:51.739 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:51.739 00:17:51.739 Run status group 0 (all jobs): 00:17:51.739 READ: bw=3223KiB/s (3301kB/s), 3223KiB/s-3223KiB/s (3301kB/s-3301kB/s), io=189MiB (198MB), run=60000-60000msec 00:17:51.739 WRITE: bw=3243KiB/s (3320kB/s), 3243KiB/s-3243KiB/s (3320kB/s-3320kB/s), io=190MiB (199MB), run=60000-60000msec 00:17:51.739 00:17:51.739 Disk stats (read/write): 00:17:51.739 nvme0n1: ios=48358/48301, merge=0/0, ticks=10420/8322, in_queue=18742, util=99.83% 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:17:51.739 nvmf hotplug test: fio successful as expected 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f 
./local-job0-0-verify.state 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:51.739 rmmod nvme_tcp 00:17:51.739 rmmod nvme_fabrics 00:17:51.739 rmmod nvme_keyring 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 86840 ']' 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 86840 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 86840 ']' 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 86840 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86840 00:17:51.739 killing process with pid 86840 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86840' 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 86840 00:17:51.739 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 86840 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:17:51.740 
22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:17:51.740 00:17:51.740 real 1m4.121s 00:17:51.740 user 3m49.645s 00:17:51.740 sys 0m22.799s 00:17:51.740 ************************************ 00:17:51.740 END TEST nvmf_initiator_timeout 00:17:51.740 ************************************ 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - 
SIGINT SIGTERM EXIT 00:17:51.740 00:17:51.740 real 6m47.796s 00:17:51.740 user 16m59.498s 00:17:51.740 sys 1m52.359s 00:17:51.740 ************************************ 00:17:51.740 END TEST nvmf_target_extra 00:17:51.740 ************************************ 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:51.740 22:49:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:51.740 22:49:04 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:51.740 22:49:04 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:51.740 22:49:04 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:51.740 22:49:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:51.740 ************************************ 00:17:51.740 START TEST nvmf_host 00:17:51.740 ************************************ 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:51.740 * Looking for test storage... 00:17:51.740 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:51.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.740 --rc genhtml_branch_coverage=1 00:17:51.740 --rc genhtml_function_coverage=1 00:17:51.740 --rc genhtml_legend=1 00:17:51.740 --rc geninfo_all_blocks=1 00:17:51.740 --rc geninfo_unexecuted_blocks=1 00:17:51.740 00:17:51.740 ' 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:51.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.740 --rc genhtml_branch_coverage=1 00:17:51.740 --rc genhtml_function_coverage=1 00:17:51.740 --rc genhtml_legend=1 00:17:51.740 --rc geninfo_all_blocks=1 00:17:51.740 --rc geninfo_unexecuted_blocks=1 00:17:51.740 00:17:51.740 ' 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:51.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.740 --rc genhtml_branch_coverage=1 00:17:51.740 --rc genhtml_function_coverage=1 00:17:51.740 --rc genhtml_legend=1 00:17:51.740 --rc geninfo_all_blocks=1 00:17:51.740 --rc geninfo_unexecuted_blocks=1 00:17:51.740 00:17:51.740 ' 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:51.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.740 --rc genhtml_branch_coverage=1 00:17:51.740 --rc genhtml_function_coverage=1 00:17:51.740 --rc genhtml_legend=1 00:17:51.740 --rc geninfo_all_blocks=1 00:17:51.740 --rc geninfo_unexecuted_blocks=1 00:17:51.740 00:17:51.740 ' 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.740 22:49:04 
nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.740 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:51.741 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:51.741 ************************************ 00:17:51.741 START TEST nvmf_identify 00:17:51.741 ************************************ 00:17:51.741 22:49:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:51.741 * Looking for test storage... 
00:17:51.741 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:51.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.741 --rc genhtml_branch_coverage=1 00:17:51.741 --rc genhtml_function_coverage=1 00:17:51.741 --rc genhtml_legend=1 00:17:51.741 --rc geninfo_all_blocks=1 00:17:51.741 --rc geninfo_unexecuted_blocks=1 00:17:51.741 00:17:51.741 ' 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:51.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.741 --rc genhtml_branch_coverage=1 00:17:51.741 --rc genhtml_function_coverage=1 00:17:51.741 --rc genhtml_legend=1 00:17:51.741 --rc geninfo_all_blocks=1 00:17:51.741 --rc geninfo_unexecuted_blocks=1 00:17:51.741 00:17:51.741 ' 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:51.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.741 --rc genhtml_branch_coverage=1 00:17:51.741 --rc genhtml_function_coverage=1 00:17:51.741 --rc genhtml_legend=1 00:17:51.741 --rc geninfo_all_blocks=1 00:17:51.741 --rc geninfo_unexecuted_blocks=1 00:17:51.741 00:17:51.741 ' 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:51.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.741 --rc genhtml_branch_coverage=1 00:17:51.741 --rc genhtml_function_coverage=1 00:17:51.741 --rc genhtml_legend=1 00:17:51.741 --rc geninfo_all_blocks=1 00:17:51.741 --rc geninfo_unexecuted_blocks=1 00:17:51.741 00:17:51.741 ' 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.741 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.742 
22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:51.742 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.742 22:49:05 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:51.742 Cannot find device "nvmf_init_br" 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:51.742 Cannot find device "nvmf_init_br2" 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:51.742 Cannot find device "nvmf_tgt_br" 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:51.742 Cannot find device "nvmf_tgt_br2" 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:51.742 Cannot find device "nvmf_init_br" 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:51.742 Cannot find device "nvmf_init_br2" 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:51.742 Cannot find device "nvmf_tgt_br" 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:51.742 Cannot find device "nvmf_tgt_br2" 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:51.742 Cannot find device "nvmf_br" 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:51.742 Cannot find device "nvmf_init_if" 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:51.742 Cannot find device "nvmf_init_if2" 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:51.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:51.742 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:51.742 
22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:51.742 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:51.743 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:51.743 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:17:51.743 00:17:51.743 --- 10.0.0.3 ping statistics --- 00:17:51.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.743 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:51.743 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:51.743 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:17:51.743 00:17:51.743 --- 10.0.0.4 ping statistics --- 00:17:51.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.743 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:51.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:51.743 00:17:51.743 --- 10.0.0.1 ping statistics --- 00:17:51.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.743 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:51.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:17:51.743 00:17:51.743 --- 10.0.0.2 ping statistics --- 00:17:51.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.743 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # return 0 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=87852 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:51.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 87852 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 87852 ']' 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.743 [2024-12-07 22:49:05.637853] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:51.743 [2024-12-07 22:49:05.637954] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.743 [2024-12-07 22:49:05.779975] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:51.743 [2024-12-07 22:49:05.822767] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.743 [2024-12-07 22:49:05.823122] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.743 [2024-12-07 22:49:05.823328] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.743 [2024-12-07 22:49:05.823482] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.743 [2024-12-07 22:49:05.823532] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:51.743 [2024-12-07 22:49:05.823803] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.743 [2024-12-07 22:49:05.823946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.743 [2024-12-07 22:49:05.824549] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:51.743 [2024-12-07 22:49:05.824595] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.743 [2024-12-07 22:49:05.858466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.743 [2024-12-07 22:49:05.926000] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.743 Malloc0 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.743 22:49:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.743 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.743 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:51.743 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.743 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.743 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.743 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:51.743 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.743 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.743 [2024-12-07 22:49:06.014934] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:51.743 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.744 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:51.744 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.744 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.744 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.744 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:51.744 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.744 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:51.744 [ 00:17:51.744 { 00:17:51.744 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:51.744 "subtype": "Discovery", 00:17:51.744 "listen_addresses": [ 00:17:51.744 { 00:17:51.744 "trtype": "TCP", 00:17:51.744 "adrfam": "IPv4", 00:17:51.744 "traddr": "10.0.0.3", 00:17:51.744 "trsvcid": "4420" 00:17:51.744 } 00:17:51.744 ], 00:17:51.744 "allow_any_host": true, 00:17:51.744 "hosts": [] 00:17:51.744 }, 00:17:51.744 { 00:17:51.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:51.744 "subtype": "NVMe", 00:17:51.744 "listen_addresses": [ 00:17:51.744 { 00:17:51.744 "trtype": "TCP", 00:17:51.744 "adrfam": "IPv4", 00:17:51.744 "traddr": "10.0.0.3", 00:17:51.744 "trsvcid": "4420" 00:17:51.744 } 00:17:51.744 ], 00:17:51.744 "allow_any_host": true, 00:17:51.744 "hosts": [], 00:17:51.744 "serial_number": "SPDK00000000000001", 00:17:51.744 "model_number": "SPDK bdev Controller", 00:17:51.744 "max_namespaces": 32, 00:17:51.744 "min_cntlid": 1, 00:17:51.744 "max_cntlid": 65519, 00:17:51.744 "namespaces": [ 00:17:51.744 { 00:17:51.744 "nsid": 1, 00:17:51.744 "bdev_name": "Malloc0", 00:17:51.744 "name": "Malloc0", 00:17:51.744 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:51.744 "eui64": "ABCDEF0123456789", 00:17:51.744 "uuid": "f870e712-d798-41d3-b438-11d2ea77f8ec" 00:17:51.744 } 00:17:51.744 ] 00:17:51.744 } 00:17:51.744 ] 00:17:51.744 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.744 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:51.744 [2024-12-07 22:49:06.072761] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:17:51.744 [2024-12-07 22:49:06.072812] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87881 ] 00:17:51.744 [2024-12-07 22:49:06.216630] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:51.744 [2024-12-07 22:49:06.216716] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:51.744 [2024-12-07 22:49:06.216725] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:51.744 [2024-12-07 22:49:06.216739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:51.744 [2024-12-07 22:49:06.216750] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:51.744 [2024-12-07 22:49:06.217127] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:51.744 [2024-12-07 22:49:06.217210] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x4b6ac0 0 00:17:51.744 [2024-12-07 22:49:06.230927] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:51.744 [2024-12-07 22:49:06.230958] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:51.744 [2024-12-07 22:49:06.230976] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:51.744 [2024-12-07 22:49:06.230981] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:51.744 [2024-12-07 22:49:06.231020] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.744 [2024-12-07 22:49:06.231029] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.744 [2024-12-07 22:49:06.231035] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b6ac0) 00:17:51.744 [2024-12-07 22:49:06.231052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:51.744 [2024-12-07 22:49:06.231090] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4ef7c0, cid 0, qid 0 00:17:51.744 [2024-12-07 22:49:06.238913] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.744 [2024-12-07 22:49:06.238939] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.744 [2024-12-07 22:49:06.238946] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.744 [2024-12-07 22:49:06.238957] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4ef7c0) on tqpair=0x4b6ac0 00:17:51.744 [2024-12-07 22:49:06.238972] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:51.744 [2024-12-07 22:49:06.238982] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:51.744 [2024-12-07 22:49:06.238990] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:51.744 [2024-12-07 22:49:06.239021] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.744 [2024-12-07 22:49:06.239026] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.744 
[2024-12-07 22:49:06.239030] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b6ac0) 00:17:51.744 [2024-12-07 22:49:06.239040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.744 [2024-12-07 22:49:06.239067] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4ef7c0, cid 0, qid 0 00:17:51.744 [2024-12-07 22:49:06.239127] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.744 [2024-12-07 22:49:06.239134] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.744 [2024-12-07 22:49:06.239137] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.744 [2024-12-07 22:49:06.239141] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4ef7c0) on tqpair=0x4b6ac0 00:17:51.744 [2024-12-07 22:49:06.239148] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:51.744 [2024-12-07 22:49:06.239165] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:51.744 [2024-12-07 22:49:06.239190] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.744 [2024-12-07 22:49:06.239210] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.744 [2024-12-07 22:49:06.239214] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b6ac0) 00:17:51.744 [2024-12-07 22:49:06.239223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.744 [2024-12-07 22:49:06.239259] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4ef7c0, cid 0, qid 0 00:17:51.744 [2024-12-07 22:49:06.239314] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.744 [2024-12-07 22:49:06.239321] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.744 [2024-12-07 22:49:06.239325] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.744 [2024-12-07 22:49:06.239329] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4ef7c0) on tqpair=0x4b6ac0 00:17:51.744 [2024-12-07 22:49:06.239336] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:51.744 [2024-12-07 22:49:06.239345] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:51.744 [2024-12-07 22:49:06.239353] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.744 [2024-12-07 22:49:06.239358] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.744 [2024-12-07 22:49:06.239362] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b6ac0) 00:17:51.744 [2024-12-07 22:49:06.239370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.744 [2024-12-07 22:49:06.239389] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4ef7c0, cid 0, qid 0 00:17:51.744 [2024-12-07 22:49:06.239437] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.744 [2024-12-07 22:49:06.239459] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:17:51.744 [2024-12-07 22:49:06.239463] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.744 [2024-12-07 22:49:06.239467] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4ef7c0) on tqpair=0x4b6ac0 00:17:51.744 [2024-12-07 22:49:06.239473] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:51.744 [2024-12-07 22:49:06.239484] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.744 [2024-12-07 22:49:06.239488] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.744 [2024-12-07 22:49:06.239492] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b6ac0) 00:17:51.744 [2024-12-07 22:49:06.239500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.744 [2024-12-07 22:49:06.239518] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4ef7c0, cid 0, qid 0 00:17:51.744 [2024-12-07 22:49:06.239576] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.744 [2024-12-07 22:49:06.239583] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.744 [2024-12-07 22:49:06.239586] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.239590] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4ef7c0) on tqpair=0x4b6ac0 00:17:51.745 [2024-12-07 22:49:06.239595] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:51.745 [2024-12-07 22:49:06.239601] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:51.745 [2024-12-07 22:49:06.239608] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:51.745 [2024-12-07 22:49:06.239714] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:51.745 [2024-12-07 22:49:06.239719] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:51.745 [2024-12-07 22:49:06.239728] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.239733] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.239737] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b6ac0) 00:17:51.745 [2024-12-07 22:49:06.239744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.745 [2024-12-07 22:49:06.239763] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4ef7c0, cid 0, qid 0 00:17:51.745 [2024-12-07 22:49:06.239812] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.745 [2024-12-07 22:49:06.239819] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.745 [2024-12-07 22:49:06.239822] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.239827] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4ef7c0) on tqpair=0x4b6ac0 00:17:51.745 [2024-12-07 22:49:06.239832] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:51.745 [2024-12-07 22:49:06.239842] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.239847] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.239850] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b6ac0) 00:17:51.745 [2024-12-07 22:49:06.239858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.745 [2024-12-07 22:49:06.239876] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4ef7c0, cid 0, qid 0 00:17:51.745 [2024-12-07 22:49:06.239923] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.745 [2024-12-07 22:49:06.239930] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.745 [2024-12-07 22:49:06.239934] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.239938] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4ef7c0) on tqpair=0x4b6ac0 00:17:51.745 [2024-12-07 22:49:06.239942] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:51.745 [2024-12-07 22:49:06.239961] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:51.745 [2024-12-07 22:49:06.239971] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:51.745 [2024-12-07 22:49:06.239986] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:51.745 [2024-12-07 22:49:06.239997] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240002] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b6ac0) 00:17:51.745 [2024-12-07 22:49:06.240009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.745 [2024-12-07 22:49:06.240031] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4ef7c0, cid 0, qid 0 00:17:51.745 [2024-12-07 22:49:06.240111] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.745 [2024-12-07 22:49:06.240118] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.745 [2024-12-07 22:49:06.240122] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240126] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b6ac0): datao=0, datal=4096, cccid=0 00:17:51.745 [2024-12-07 22:49:06.240131] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x4ef7c0) on tqpair(0x4b6ac0): expected_datao=0, payload_size=4096 00:17:51.745 [2024-12-07 22:49:06.240136] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240145] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240149] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240157] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.745 [2024-12-07 22:49:06.240164] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.745 [2024-12-07 22:49:06.240167] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240171] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4ef7c0) on tqpair=0x4b6ac0 00:17:51.745 [2024-12-07 22:49:06.240180] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:51.745 [2024-12-07 22:49:06.240185] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:51.745 [2024-12-07 22:49:06.240190] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:51.745 [2024-12-07 22:49:06.240195] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:51.745 [2024-12-07 22:49:06.240200] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:51.745 [2024-12-07 22:49:06.240205] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:51.745 [2024-12-07 22:49:06.240214] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:51.745 [2024-12-07 22:49:06.240226] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240231] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240235] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b6ac0) 00:17:51.745 [2024-12-07 22:49:06.240243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:51.745 [2024-12-07 22:49:06.240264] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4ef7c0, cid 0, qid 0 00:17:51.745 [2024-12-07 22:49:06.240341] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.745 [2024-12-07 22:49:06.240348] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.745 [2024-12-07 22:49:06.240352] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240356] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4ef7c0) on tqpair=0x4b6ac0 00:17:51.745 [2024-12-07 22:49:06.240364] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240369] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240373] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x4b6ac0) 00:17:51.745 [2024-12-07 22:49:06.240380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.745 [2024-12-07 22:49:06.240387] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:17:51.745 [2024-12-07 22:49:06.240391] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240395] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x4b6ac0) 00:17:51.745 [2024-12-07 22:49:06.240401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.745 [2024-12-07 22:49:06.240407] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240412] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240415] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x4b6ac0) 00:17:51.745 [2024-12-07 22:49:06.240421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.745 [2024-12-07 22:49:06.240428] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240432] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240436] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b6ac0) 00:17:51.745 [2024-12-07 22:49:06.240442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.745 [2024-12-07 22:49:06.240447] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:51.745 [2024-12-07 22:49:06.240461] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:51.745 [2024-12-07 22:49:06.240469] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240473] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b6ac0) 00:17:51.745 [2024-12-07 22:49:06.240480] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.745 [2024-12-07 22:49:06.240501] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4ef7c0, cid 0, qid 0 00:17:51.745 [2024-12-07 22:49:06.240508] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4ef940, cid 1, qid 0 00:17:51.745 [2024-12-07 22:49:06.240513] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efac0, cid 2, qid 0 00:17:51.745 [2024-12-07 22:49:06.240518] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efc40, cid 3, qid 0 00:17:51.745 [2024-12-07 22:49:06.240523] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efdc0, cid 4, qid 0 00:17:51.745 [2024-12-07 22:49:06.240619] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.745 [2024-12-07 22:49:06.240626] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.745 [2024-12-07 22:49:06.240630] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240634] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efdc0) on tqpair=0x4b6ac0 00:17:51.745 [2024-12-07 22:49:06.240639] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:51.745 [2024-12-07 22:49:06.240645] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:51.745 [2024-12-07 22:49:06.240656] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.745 [2024-12-07 22:49:06.240661] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b6ac0) 00:17:51.745 [2024-12-07 22:49:06.240669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.745 [2024-12-07 22:49:06.240687] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efdc0, cid 4, qid 0 00:17:51.746 [2024-12-07 22:49:06.240742] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.746 [2024-12-07 22:49:06.240749] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.746 [2024-12-07 22:49:06.240752] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.240756] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b6ac0): datao=0, datal=4096, cccid=4 00:17:51.746 [2024-12-07 22:49:06.240761] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x4efdc0) on tqpair(0x4b6ac0): expected_datao=0, payload_size=4096 00:17:51.746 [2024-12-07 22:49:06.240765] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.240773] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.240777] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.240785] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.746 [2024-12-07 22:49:06.240791] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.746 [2024-12-07 22:49:06.240795] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.240799] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efdc0) on tqpair=0x4b6ac0 00:17:51.746 [2024-12-07 22:49:06.240812] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:51.746 [2024-12-07 22:49:06.240840] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.240847] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b6ac0) 00:17:51.746 [2024-12-07 22:49:06.240855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.746 [2024-12-07 22:49:06.240863] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.240867] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.240871] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x4b6ac0) 00:17:51.746 [2024-12-07 22:49:06.240877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.746 [2024-12-07 22:49:06.240914] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efdc0, cid 4, qid 0 00:17:51.746 [2024-12-07 22:49:06.240922] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4eff40, cid 5, qid 0 00:17:51.746 [2024-12-07 22:49:06.241007] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.746 [2024-12-07 22:49:06.241014] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.746 [2024-12-07 22:49:06.241018] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.241021] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b6ac0): datao=0, datal=1024, cccid=4 00:17:51.746 [2024-12-07 22:49:06.241026] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x4efdc0) on tqpair(0x4b6ac0): expected_datao=0, payload_size=1024 00:17:51.746 [2024-12-07 22:49:06.241031] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.241038] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.241042] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.241047] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.746 [2024-12-07 22:49:06.241053] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.746 [2024-12-07 22:49:06.241057] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.241061] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4eff40) on tqpair=0x4b6ac0 00:17:51.746 [2024-12-07 22:49:06.241079] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.746 [2024-12-07 22:49:06.241087] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.746 [2024-12-07 22:49:06.241091] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.241095] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efdc0) on tqpair=0x4b6ac0 00:17:51.746 [2024-12-07 22:49:06.241106] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.241112] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b6ac0) 00:17:51.746 [2024-12-07 22:49:06.241119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.746 [2024-12-07 22:49:06.241143] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efdc0, cid 4, qid 0 00:17:51.746 [2024-12-07 22:49:06.241227] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.746 [2024-12-07 22:49:06.241234] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.746 [2024-12-07 22:49:06.241237] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.241241] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b6ac0): datao=0, datal=3072, cccid=4 00:17:51.746 [2024-12-07 22:49:06.241246] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x4efdc0) on tqpair(0x4b6ac0): expected_datao=0, payload_size=3072 00:17:51.746 [2024-12-07 22:49:06.241251] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.241258] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.746 [2024-12-07 22:49:06.241262] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.746 [2024-12-07 
22:49:06.241270] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:51.746 [2024-12-07 22:49:06.241276] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:51.746 [2024-12-07 22:49:06.241280] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:51.746 [2024-12-07 22:49:06.241284] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efdc0) on tqpair=0x4b6ac0
00:17:51.746 [2024-12-07 22:49:06.241294] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:51.746 [2024-12-07 22:49:06.241299] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x4b6ac0)
00:17:51.746 [2024-12-07 22:49:06.241306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:51.746 [2024-12-07 22:49:06.241330] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efdc0, cid 4, qid 0
00:17:51.746 [2024-12-07 22:49:06.241394] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:17:51.746 [2024-12-07 22:49:06.241401] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:17:51.746 [2024-12-07 22:49:06.241405] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:17:51.746 [2024-12-07 22:49:06.241409] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x4b6ac0): datao=0, datal=8, cccid=4
00:17:51.746 [2024-12-07 22:49:06.241413] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x4efdc0) on tqpair(0x4b6ac0): expected_datao=0, payload_size=8
00:17:51.746 [2024-12-07 22:49:06.241418] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:51.746 [2024-12-07 22:49:06.241425] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:17:51.746 [2024-12-07 22:49:06.241429] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:17:51.746 [2024-12-07 22:49:06.241445] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:51.746 [2024-12-07 22:49:06.241452] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:51.746 [2024-12-07 22:49:06.241456] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:51.746 [2024-12-07 22:49:06.241460] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efdc0) on tqpair=0x4b6ac0
00:17:51.746 =====================================================
00:17:51.746 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery
00:17:51.746 =====================================================
00:17:51.746 Controller Capabilities/Features
00:17:51.746 ================================
00:17:51.746 Vendor ID: 0000
00:17:51.746 Subsystem Vendor ID: 0000
00:17:51.746 Serial Number: ....................
00:17:51.746 Model Number: ........................................
00:17:51.746 Firmware Version: 24.09.1
00:17:51.746 Recommended Arb Burst: 0
00:17:51.746 IEEE OUI Identifier: 00 00 00
00:17:51.746 Multi-path I/O
00:17:51.746 May have multiple subsystem ports: No
00:17:51.746 May have multiple controllers: No
00:17:51.746 Associated with SR-IOV VF: No
00:17:51.746 Max Data Transfer Size: 131072
00:17:51.746 Max Number of Namespaces: 0
00:17:51.746 Max Number of I/O Queues: 1024
00:17:51.746 NVMe Specification Version (VS): 1.3
00:17:51.746 NVMe Specification Version (Identify): 1.3
00:17:51.746 Maximum Queue Entries: 128
00:17:51.746 Contiguous Queues Required: Yes
00:17:51.746 Arbitration Mechanisms Supported
00:17:51.746 Weighted Round Robin: Not Supported
00:17:51.746 Vendor Specific: Not Supported
00:17:51.746 Reset Timeout: 15000 ms
00:17:51.746 Doorbell Stride: 4 bytes
00:17:51.746 NVM Subsystem Reset: Not Supported
00:17:51.746 Command Sets Supported
00:17:51.746 NVM Command Set: Supported
00:17:51.746 Boot Partition: Not Supported
00:17:51.746 Memory Page Size Minimum: 4096 bytes
00:17:51.746 Memory Page Size Maximum: 4096 bytes
00:17:51.746 Persistent Memory Region: Not Supported
00:17:51.746 Optional Asynchronous Events Supported
00:17:51.746 Namespace Attribute Notices: Not Supported
00:17:51.746 Firmware Activation Notices: Not Supported
00:17:51.746 ANA Change Notices: Not Supported
00:17:51.746 PLE Aggregate Log Change Notices: Not Supported
00:17:51.746 LBA Status Info Alert Notices: Not Supported
00:17:51.746 EGE Aggregate Log Change Notices: Not Supported
00:17:51.746 Normal NVM Subsystem Shutdown event: Not Supported
00:17:51.746 Zone Descriptor Change Notices: Not Supported
00:17:51.746 Discovery Log Change Notices: Supported
00:17:51.746 Controller Attributes
00:17:51.746 128-bit Host Identifier: Not Supported
00:17:51.746 Non-Operational Permissive Mode: Not Supported
00:17:51.746 NVM Sets: Not Supported
00:17:51.746 Read Recovery Levels: Not Supported
00:17:51.746 Endurance Groups: Not Supported
00:17:51.746 Predictable Latency Mode: Not Supported
00:17:51.746 Traffic Based Keep ALive: Not Supported
00:17:51.746 Namespace Granularity: Not Supported
00:17:51.746 SQ Associations: Not Supported
00:17:51.746 UUID List: Not Supported
00:17:51.746 Multi-Domain Subsystem: Not Supported
00:17:51.746 Fixed Capacity Management: Not Supported
00:17:51.746 Variable Capacity Management: Not Supported
00:17:51.746 Delete Endurance Group: Not Supported
00:17:51.746 Delete NVM Set: Not Supported
00:17:51.746 Extended LBA Formats Supported: Not Supported
00:17:51.746 Flexible Data Placement Supported: Not Supported
00:17:51.747
00:17:51.747 Controller Memory Buffer Support
00:17:51.747 ================================
00:17:51.747 Supported: No
00:17:51.747
00:17:51.747 Persistent Memory Region Support
00:17:51.747 ================================
00:17:51.747 Supported: No
00:17:51.747
00:17:51.747 Admin Command Set Attributes
00:17:51.747 ============================
00:17:51.747 Security Send/Receive: Not Supported
00:17:51.747 Format NVM: Not Supported
00:17:51.747 Firmware Activate/Download: Not Supported
00:17:51.747 Namespace Management: Not Supported
00:17:51.747 Device Self-Test: Not Supported
00:17:51.747 Directives: Not Supported
00:17:51.747 NVMe-MI: Not Supported
00:17:51.747 Virtualization Management: Not Supported
00:17:51.747 Doorbell Buffer Config: Not Supported
00:17:51.747 Get LBA Status Capability: Not Supported
00:17:51.747 Command & Feature Lockdown Capability: Not Supported
00:17:51.747 Abort Command Limit: 1
00:17:51.747 Async Event Request Limit: 4
00:17:51.747 Number of Firmware Slots: N/A
00:17:51.747 Firmware Slot 1 Read-Only: N/A
00:17:51.747 Firmware Activation Without Reset: N/A
00:17:51.747 Multiple Update Detection Support: N/A
00:17:51.747 Firmware Update Granularity: No Information Provided
00:17:51.747 Per-Namespace SMART Log: No
00:17:51.747 Asymmetric Namespace Access Log Page: Not Supported
00:17:51.747 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:17:51.747 Command Effects Log Page: Not Supported
00:17:51.747 Get Log Page Extended Data: Supported
00:17:51.747 Telemetry Log Pages: Not Supported
00:17:51.747 Persistent Event Log Pages: Not Supported
00:17:51.747 Supported Log Pages Log Page: May Support
00:17:51.747 Commands Supported & Effects Log Page: Not Supported
00:17:51.747 Feature Identifiers & Effects Log Page:May Support
00:17:51.747 NVMe-MI Commands & Effects Log Page: May Support
00:17:51.747 Data Area 4 for Telemetry Log: Not Supported
00:17:51.747 Error Log Page Entries Supported: 128
00:17:51.747 Keep Alive: Not Supported
00:17:51.747
00:17:51.747 NVM Command Set Attributes
00:17:51.747 ==========================
00:17:51.747 Submission Queue Entry Size
00:17:51.747 Max: 1
00:17:51.747 Min: 1
00:17:51.747 Completion Queue Entry Size
00:17:51.747 Max: 1
00:17:51.747 Min: 1
00:17:51.747 Number of Namespaces: 0
00:17:51.747 Compare Command: Not Supported
00:17:51.747 Write Uncorrectable Command: Not Supported
00:17:51.747 Dataset Management Command: Not Supported
00:17:51.747 Write Zeroes Command: Not Supported
00:17:51.747 Set Features Save Field: Not Supported
00:17:51.747 Reservations: Not Supported
00:17:51.747 Timestamp: Not Supported
00:17:51.747 Copy: Not Supported
00:17:51.747 Volatile Write Cache: Not Present
00:17:51.747 Atomic Write Unit (Normal): 1
00:17:51.747 Atomic Write Unit (PFail): 1
00:17:51.747 Atomic Compare & Write Unit: 1
00:17:51.747 Fused Compare & Write: Supported
00:17:51.747 Scatter-Gather List
00:17:51.747 SGL Command Set: Supported
00:17:51.747 SGL Keyed: Supported
00:17:51.747 SGL Bit Bucket Descriptor: Not Supported
00:17:51.747 SGL Metadata Pointer: Not Supported
00:17:51.747 Oversized SGL: Not Supported
00:17:51.747 SGL Metadata Address: Not Supported
00:17:51.747 SGL Offset: Supported
00:17:51.747 Transport SGL Data Block: Not Supported
00:17:51.747 Replay Protected Memory Block: Not Supported
00:17:51.747
00:17:51.747 Firmware Slot Information
00:17:51.747 =========================
00:17:51.747 Active slot: 0
00:17:51.747
00:17:51.747
00:17:51.747 Error Log
00:17:51.747 =========
00:17:51.747
00:17:51.747 Active Namespaces
00:17:51.747 =================
00:17:51.747 Discovery Log Page
00:17:51.747 ==================
00:17:51.747 Generation Counter: 2
00:17:51.747 Number of Records: 2
00:17:51.747 Record Format: 0
00:17:51.747
00:17:51.747 Discovery Log Entry 0
00:17:51.747 ----------------------
00:17:51.747 Transport Type: 3 (TCP)
00:17:51.747 Address Family: 1 (IPv4)
00:17:51.747 Subsystem Type: 3 (Current Discovery Subsystem)
00:17:51.747 Entry Flags:
00:17:51.747 Duplicate Returned Information: 1
00:17:51.747 Explicit Persistent Connection Support for Discovery: 1
00:17:51.747 Transport Requirements:
00:17:51.747 Secure Channel: Not Required
00:17:51.747 Port ID: 0 (0x0000)
00:17:51.747 Controller ID: 65535 (0xffff)
00:17:51.747 Admin Max SQ Size: 128
00:17:51.747 Transport Service Identifier: 4420
00:17:51.747 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:17:51.747 Transport Address: 10.0.0.3
00:17:51.747 Discovery Log Entry 1
00:17:51.747 ----------------------
00:17:51.747 Transport Type: 3 (TCP)
00:17:51.747 Address Family: 1 (IPv4)
00:17:51.747 Subsystem Type: 2 (NVM Subsystem)
00:17:51.747 Entry Flags:
00:17:51.747 Duplicate Returned Information: 0
00:17:51.747 Explicit Persistent Connection Support for Discovery: 0
00:17:51.747 Transport Requirements:
00:17:51.747 Secure Channel: Not Required
00:17:51.747 Port ID: 0 (0x0000)
00:17:51.747 Controller ID: 65535 (0xffff)
00:17:51.747 Admin Max SQ Size: 128
00:17:51.747 Transport Service Identifier: 4420
00:17:51.747 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:17:51.747 Transport Address: 10.0.0.3 [2024-12-07 22:49:06.241564] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:17:51.747 [2024-12-07 22:49:06.241577] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4ef7c0) on tqpair=0x4b6ac0
00:17:51.747 [2024-12-07 22:49:06.241584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:51.747 [2024-12-07 22:49:06.241590] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4ef940) on tqpair=0x4b6ac0
00:17:51.747 [2024-12-07 22:49:06.241595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:51.747 [2024-12-07 22:49:06.241600] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efac0) on tqpair=0x4b6ac0
00:17:51.747 [2024-12-07 22:49:06.241605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:51.747 [2024-12-07 22:49:06.241610] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efc40) on tqpair=0x4b6ac0
00:17:51.747 [2024-12-07 22:49:06.241614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:51.747 [2024-12-07 22:49:06.241623] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:51.747 [2024-12-07 22:49:06.241628] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:51.747 [2024-12-07 22:49:06.241632] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b6ac0)
00:17:51.747 [2024-12-07 22:49:06.241640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:51.747 [2024-12-07 22:49:06.241661] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efc40, cid 3, qid 0
00:17:51.747 [2024-12-07 22:49:06.241710] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:51.747 [2024-12-07 22:49:06.241717] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:51.747 [2024-12-07 22:49:06.241721] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:51.747 [2024-12-07 22:49:06.241725] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efc40) on tqpair=0x4b6ac0
00:17:51.747 [2024-12-07 22:49:06.241733] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:51.747 [2024-12-07 22:49:06.241738] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:51.747 [2024-12-07 22:49:06.241742] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b6ac0)
00:17:51.747 [2024-12-07 22:49:06.241749]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.747 [2024-12-07 22:49:06.241771] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efc40, cid 3, qid 0 00:17:51.747 [2024-12-07 22:49:06.241828] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.747 [2024-12-07 22:49:06.241835] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.747 [2024-12-07 22:49:06.241838] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.747 [2024-12-07 22:49:06.241842] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efc40) on tqpair=0x4b6ac0 00:17:51.747 [2024-12-07 22:49:06.241847] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:51.747 [2024-12-07 22:49:06.241852] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:51.747 [2024-12-07 22:49:06.241862] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.747 [2024-12-07 22:49:06.241867] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.747 [2024-12-07 22:49:06.241871] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b6ac0) 00:17:51.747 [2024-12-07 22:49:06.241878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.747 [2024-12-07 22:49:06.241908] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efc40, cid 3, qid 0 00:17:51.747 [2024-12-07 22:49:06.241953] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.747 [2024-12-07 22:49:06.241960] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.747 [2024-12-07 22:49:06.241963] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.747 [2024-12-07 22:49:06.241967] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efc40) on tqpair=0x4b6ac0 00:17:51.748 [2024-12-07 22:49:06.241979] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.241984] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.241987] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b6ac0) 00:17:51.748 [2024-12-07 22:49:06.241995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.748 [2024-12-07 22:49:06.242014] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efc40, cid 3, qid 0 00:17:51.748 [2024-12-07 22:49:06.242060] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.748 [2024-12-07 22:49:06.242066] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.748 [2024-12-07 22:49:06.242070] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242074] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efc40) on tqpair=0x4b6ac0 00:17:51.748 [2024-12-07 22:49:06.242084] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242090] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242094] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b6ac0) 00:17:51.748 [2024-12-07 22:49:06.242101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.748 [2024-12-07 22:49:06.242119] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efc40, cid 3, qid 0 00:17:51.748 [2024-12-07 22:49:06.242167] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.748 [2024-12-07 22:49:06.242173] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.748 [2024-12-07 22:49:06.242177] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242181] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efc40) on tqpair=0x4b6ac0 00:17:51.748 [2024-12-07 22:49:06.242191] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242196] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242200] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b6ac0) 00:17:51.748 [2024-12-07 22:49:06.242207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.748 [2024-12-07 22:49:06.242224] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efc40, cid 3, qid 0 00:17:51.748 [2024-12-07 22:49:06.242270] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.748 [2024-12-07 22:49:06.242287] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.748 [2024-12-07 22:49:06.242291] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242295] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efc40) on tqpair=0x4b6ac0 00:17:51.748 [2024-12-07 22:49:06.242306] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242311] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242315] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b6ac0) 00:17:51.748 [2024-12-07 22:49:06.242323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.748 [2024-12-07 22:49:06.242340] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efc40, cid 3, qid 0 00:17:51.748 [2024-12-07 22:49:06.242386] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.748 [2024-12-07 22:49:06.242393] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.748 [2024-12-07 22:49:06.242397] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242401] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efc40) on tqpair=0x4b6ac0 00:17:51.748 [2024-12-07 22:49:06.242412] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242417] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242421] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b6ac0) 00:17:51.748 [2024-12-07 22:49:06.242428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.748 [2024-12-07 22:49:06.242446] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efc40, cid 3, qid 0 00:17:51.748 [2024-12-07 22:49:06.242492] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.748 [2024-12-07 22:49:06.242499] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.748 [2024-12-07 22:49:06.242503] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242507] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efc40) on tqpair=0x4b6ac0 00:17:51.748 [2024-12-07 22:49:06.242517] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242523] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242526] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b6ac0) 00:17:51.748 [2024-12-07 22:49:06.242534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.748 [2024-12-07 22:49:06.242551] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efc40, cid 3, qid 0 00:17:51.748 [2024-12-07 22:49:06.242606] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.748 [2024-12-07 22:49:06.242613] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.748 [2024-12-07 22:49:06.242617] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242621] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efc40) on tqpair=0x4b6ac0 00:17:51.748 [2024-12-07 22:49:06.242638] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242642] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242646] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b6ac0) 00:17:51.748 [2024-12-07 22:49:06.242654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.748 [2024-12-07 22:49:06.242671] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efc40, cid 3, qid 0 00:17:51.748 [2024-12-07 22:49:06.242713] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.748 [2024-12-07 22:49:06.242720] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.748 [2024-12-07 22:49:06.242723] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242727] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efc40) on tqpair=0x4b6ac0 00:17:51.748 [2024-12-07 22:49:06.242738] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242742] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242746] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b6ac0) 00:17:51.748 [2024-12-07 22:49:06.242753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.748 [2024-12-07 22:49:06.242770] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efc40, cid 3, qid 0 00:17:51.748 [2024-12-07 22:49:06.242812] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.748 [2024-12-07 22:49:06.242819] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.748 [2024-12-07 22:49:06.242823] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242827] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efc40) on tqpair=0x4b6ac0 00:17:51.748 [2024-12-07 22:49:06.242837] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242842] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.242846] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b6ac0) 00:17:51.748 [2024-12-07 22:49:06.242853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.748 [2024-12-07 22:49:06.242870] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efc40, cid 3, qid 0 00:17:51.748 [2024-12-07 22:49:06.246890] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.748 [2024-12-07 22:49:06.246910] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.748 [2024-12-07 22:49:06.246931] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.246936] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efc40) on tqpair=0x4b6ac0 00:17:51.748 [2024-12-07 22:49:06.246949] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.246954] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.246958] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x4b6ac0) 00:17:51.748 [2024-12-07 22:49:06.246967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.748 [2024-12-07 22:49:06.246991] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x4efc40, cid 3, qid 0 00:17:51.748 [2024-12-07 22:49:06.247036] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.748 [2024-12-07 22:49:06.247043] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.748 [2024-12-07 22:49:06.247046] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.748 [2024-12-07 22:49:06.247050] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x4efc40) on tqpair=0x4b6ac0 00:17:51.748 [2024-12-07 22:49:06.247059] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:17:51.748 00:17:51.748 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:51.748 [2024-12-07 22:49:06.288931] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
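From here the run repeats the connect sequence against nqn.2016-06.io.spdk:cnode1, the NVM subsystem from Discovery Log Entry 1: the "read vs" and "read cap" states in the trace below are fabrics PROPERTY GET commands for the version and capabilities registers, followed by CC.EN = 1 and a wait for CSTS.RDY = 1. A minimal sketch of inspecting those registers after spdk_nvme_connect() returns, using the public getters in spdk/nvme.h; the program name is hypothetical and error handling is trimmed.

#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	union spdk_nvme_vs_register vs;
	union spdk_nvme_cap_register cap;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "regs_sketch"; /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* The I/O subsystem listed in Discovery Log Entry 1 above. */
	if (spdk_nvme_transport_id_parse(&trid,
	        "trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
	        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* The "read vs" / "read cap" states in the trace are fabrics
	 * PROPERTY GETs for these registers; the getters return the
	 * values cached during that init sequence. */
	vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	printf("VS %u.%u, CAP.MQES %u (entries - 1), CAP.TO %u (units of 500 ms)\n",
	       (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr,
	       (unsigned)cap.bits.mqes, (unsigned)cap.bits.to);

	spdk_nvme_detach(ctrlr);
	return 0;
}

For reference against the identify dump above: "NVMe Specification Version (VS): 1.3" corresponds to VS 1.3, "Maximum Queue Entries: 128" to MQES 127, and "Reset Timeout: 15000 ms" to CAP.TO 30.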
00:17:51.749 [2024-12-07 22:49:06.288978] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87883 ] 00:17:51.749 [2024-12-07 22:49:06.426332] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:51.749 [2024-12-07 22:49:06.426407] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:51.749 [2024-12-07 22:49:06.426415] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:51.749 [2024-12-07 22:49:06.426426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:51.749 [2024-12-07 22:49:06.426434] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:51.749 [2024-12-07 22:49:06.426745] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:51.749 [2024-12-07 22:49:06.426809] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2477ac0 0 00:17:51.749 [2024-12-07 22:49:06.439971] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:51.749 [2024-12-07 22:49:06.439995] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:51.749 [2024-12-07 22:49:06.440017] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:51.749 [2024-12-07 22:49:06.440020] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:51.749 [2024-12-07 22:49:06.440050] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.440057] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.440061] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ac0) 00:17:51.749 [2024-12-07 22:49:06.440074] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:51.749 [2024-12-07 22:49:06.440104] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b07c0, cid 0, qid 0 00:17:51.749 [2024-12-07 22:49:06.447980] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.749 [2024-12-07 22:49:06.448001] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.749 [2024-12-07 22:49:06.448023] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448028] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b07c0) on tqpair=0x2477ac0 00:17:51.749 [2024-12-07 22:49:06.448038] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:51.749 [2024-12-07 22:49:06.448046] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:51.749 [2024-12-07 22:49:06.448052] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:51.749 [2024-12-07 22:49:06.448067] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448073] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448077] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ac0) 00:17:51.749 [2024-12-07 22:49:06.448086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.749 [2024-12-07 22:49:06.448112] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b07c0, cid 0, qid 0 00:17:51.749 [2024-12-07 22:49:06.448168] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.749 [2024-12-07 22:49:06.448175] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.749 [2024-12-07 22:49:06.448179] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448183] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b07c0) on tqpair=0x2477ac0 00:17:51.749 [2024-12-07 22:49:06.448197] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:51.749 [2024-12-07 22:49:06.448205] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:51.749 [2024-12-07 22:49:06.448228] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448233] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448237] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ac0) 00:17:51.749 [2024-12-07 22:49:06.448260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.749 [2024-12-07 22:49:06.448280] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b07c0, cid 0, qid 0 00:17:51.749 [2024-12-07 22:49:06.448327] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.749 [2024-12-07 22:49:06.448334] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.749 [2024-12-07 22:49:06.448338] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448342] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b07c0) on tqpair=0x2477ac0 00:17:51.749 [2024-12-07 22:49:06.448348] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:51.749 [2024-12-07 22:49:06.448357] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:51.749 [2024-12-07 22:49:06.448365] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448369] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448373] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ac0) 00:17:51.749 [2024-12-07 22:49:06.448381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.749 [2024-12-07 22:49:06.448399] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b07c0, cid 0, qid 0 00:17:51.749 [2024-12-07 22:49:06.448449] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.749 [2024-12-07 22:49:06.448456] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.749 [2024-12-07 22:49:06.448460] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448465] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b07c0) on tqpair=0x2477ac0 00:17:51.749 [2024-12-07 22:49:06.448470] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:51.749 [2024-12-07 22:49:06.448481] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448486] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448490] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ac0) 00:17:51.749 [2024-12-07 22:49:06.448498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.749 [2024-12-07 22:49:06.448515] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b07c0, cid 0, qid 0 00:17:51.749 [2024-12-07 22:49:06.448562] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.749 [2024-12-07 22:49:06.448570] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.749 [2024-12-07 22:49:06.448574] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448578] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b07c0) on tqpair=0x2477ac0 00:17:51.749 [2024-12-07 22:49:06.448583] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:51.749 [2024-12-07 22:49:06.448588] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:51.749 [2024-12-07 22:49:06.448597] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:51.749 [2024-12-07 22:49:06.448703] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:51.749 [2024-12-07 22:49:06.448708] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:51.749 [2024-12-07 22:49:06.448717] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448722] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448726] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ac0) 00:17:51.749 [2024-12-07 22:49:06.448733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.749 [2024-12-07 22:49:06.448752] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b07c0, cid 0, qid 0 00:17:51.749 [2024-12-07 22:49:06.448800] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.749 [2024-12-07 22:49:06.448814] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.749 [2024-12-07 22:49:06.448818] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448822] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b07c0) on tqpair=0x2477ac0 00:17:51.749 [2024-12-07 22:49:06.448828] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:51.749 [2024-12-07 22:49:06.448838] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448843] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448847] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ac0) 00:17:51.749 [2024-12-07 22:49:06.448854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.749 [2024-12-07 22:49:06.448872] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b07c0, cid 0, qid 0 00:17:51.749 [2024-12-07 22:49:06.448943] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.749 [2024-12-07 22:49:06.448952] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.749 [2024-12-07 22:49:06.448957] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.448961] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b07c0) on tqpair=0x2477ac0 00:17:51.749 [2024-12-07 22:49:06.448966] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:51.749 [2024-12-07 22:49:06.448972] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:51.749 [2024-12-07 22:49:06.448981] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:51.749 [2024-12-07 22:49:06.448997] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:51.749 [2024-12-07 22:49:06.449008] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.749 [2024-12-07 22:49:06.449012] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ac0) 00:17:51.749 [2024-12-07 22:49:06.449020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.749 [2024-12-07 22:49:06.449042] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b07c0, cid 0, qid 0 00:17:51.750 [2024-12-07 22:49:06.449136] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.750 [2024-12-07 22:49:06.449144] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.750 [2024-12-07 22:49:06.449148] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449152] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477ac0): datao=0, datal=4096, cccid=0 00:17:51.750 [2024-12-07 22:49:06.449158] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24b07c0) on tqpair(0x2477ac0): expected_datao=0, payload_size=4096 00:17:51.750 [2024-12-07 22:49:06.449163] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449171] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449175] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.750 [2024-12-07 
22:49:06.449184] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.750 [2024-12-07 22:49:06.449191] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.750 [2024-12-07 22:49:06.449195] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449199] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b07c0) on tqpair=0x2477ac0 00:17:51.750 [2024-12-07 22:49:06.449208] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:51.750 [2024-12-07 22:49:06.449213] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:51.750 [2024-12-07 22:49:06.449218] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:51.750 [2024-12-07 22:49:06.449223] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:51.750 [2024-12-07 22:49:06.449228] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:51.750 [2024-12-07 22:49:06.449233] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:51.750 [2024-12-07 22:49:06.449243] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:51.750 [2024-12-07 22:49:06.449255] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449260] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449264] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ac0) 00:17:51.750 [2024-12-07 22:49:06.449287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:51.750 [2024-12-07 22:49:06.449307] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b07c0, cid 0, qid 0 00:17:51.750 [2024-12-07 22:49:06.449355] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.750 [2024-12-07 22:49:06.449362] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.750 [2024-12-07 22:49:06.449366] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449370] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b07c0) on tqpair=0x2477ac0 00:17:51.750 [2024-12-07 22:49:06.449378] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449383] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449387] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2477ac0) 00:17:51.750 [2024-12-07 22:49:06.449393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.750 [2024-12-07 22:49:06.449400] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449404] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449408] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2477ac0) 00:17:51.750 
[2024-12-07 22:49:06.449414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.750 [2024-12-07 22:49:06.449420] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449424] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449428] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2477ac0) 00:17:51.750 [2024-12-07 22:49:06.449434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.750 [2024-12-07 22:49:06.449440] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449444] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449448] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.750 [2024-12-07 22:49:06.449454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.750 [2024-12-07 22:49:06.449459] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:51.750 [2024-12-07 22:49:06.449472] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:51.750 [2024-12-07 22:49:06.449480] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449484] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477ac0) 00:17:51.750 [2024-12-07 22:49:06.449492] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.750 [2024-12-07 22:49:06.449512] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b07c0, cid 0, qid 0 00:17:51.750 [2024-12-07 22:49:06.449519] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0940, cid 1, qid 0 00:17:51.750 [2024-12-07 22:49:06.449524] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0ac0, cid 2, qid 0 00:17:51.750 [2024-12-07 22:49:06.449529] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.750 [2024-12-07 22:49:06.449534] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0dc0, cid 4, qid 0 00:17:51.750 [2024-12-07 22:49:06.449615] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.750 [2024-12-07 22:49:06.449622] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.750 [2024-12-07 22:49:06.449626] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449630] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0dc0) on tqpair=0x2477ac0 00:17:51.750 [2024-12-07 22:49:06.449635] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:51.750 [2024-12-07 22:49:06.449641] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:51.750 [2024-12-07 22:49:06.449654] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:51.750 [2024-12-07 22:49:06.449661] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:51.750 [2024-12-07 22:49:06.449668] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449673] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449676] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477ac0) 00:17:51.750 [2024-12-07 22:49:06.449684] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:51.750 [2024-12-07 22:49:06.449703] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0dc0, cid 4, qid 0 00:17:51.750 [2024-12-07 22:49:06.449753] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.750 [2024-12-07 22:49:06.449761] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.750 [2024-12-07 22:49:06.449764] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449769] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0dc0) on tqpair=0x2477ac0 00:17:51.750 [2024-12-07 22:49:06.449832] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:51.750 [2024-12-07 22:49:06.449844] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:51.750 [2024-12-07 22:49:06.449853] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449857] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477ac0) 00:17:51.750 [2024-12-07 22:49:06.449865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.750 [2024-12-07 22:49:06.449895] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0dc0, cid 4, qid 0 00:17:51.750 [2024-12-07 22:49:06.449974] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.750 [2024-12-07 22:49:06.449983] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.750 [2024-12-07 22:49:06.449987] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.449991] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477ac0): datao=0, datal=4096, cccid=4 00:17:51.750 [2024-12-07 22:49:06.449997] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24b0dc0) on tqpair(0x2477ac0): expected_datao=0, payload_size=4096 00:17:51.750 [2024-12-07 22:49:06.450001] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.450009] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.450014] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.450023] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.750 [2024-12-07 22:49:06.450029] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:17:51.750 [2024-12-07 22:49:06.450033] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.450038] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0dc0) on tqpair=0x2477ac0 00:17:51.750 [2024-12-07 22:49:06.450055] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:51.750 [2024-12-07 22:49:06.450066] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:51.750 [2024-12-07 22:49:06.450077] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:51.750 [2024-12-07 22:49:06.450086] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.750 [2024-12-07 22:49:06.450090] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477ac0) 00:17:51.750 [2024-12-07 22:49:06.450098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.750 [2024-12-07 22:49:06.450120] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0dc0, cid 4, qid 0 00:17:51.750 [2024-12-07 22:49:06.450192] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.750 [2024-12-07 22:49:06.450199] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.750 [2024-12-07 22:49:06.450203] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450207] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477ac0): datao=0, datal=4096, cccid=4 00:17:51.751 [2024-12-07 22:49:06.450212] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24b0dc0) on tqpair(0x2477ac0): expected_datao=0, payload_size=4096 00:17:51.751 [2024-12-07 22:49:06.450217] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450224] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450229] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450238] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.751 [2024-12-07 22:49:06.450244] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.751 [2024-12-07 22:49:06.450248] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450252] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0dc0) on tqpair=0x2477ac0 00:17:51.751 [2024-12-07 22:49:06.450263] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:51.751 [2024-12-07 22:49:06.450274] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:51.751 [2024-12-07 22:49:06.450297] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450301] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477ac0) 00:17:51.751 [2024-12-07 22:49:06.450309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 
cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.751 [2024-12-07 22:49:06.450328] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0dc0, cid 4, qid 0 00:17:51.751 [2024-12-07 22:49:06.450387] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.751 [2024-12-07 22:49:06.450394] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.751 [2024-12-07 22:49:06.450398] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450402] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477ac0): datao=0, datal=4096, cccid=4 00:17:51.751 [2024-12-07 22:49:06.450407] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24b0dc0) on tqpair(0x2477ac0): expected_datao=0, payload_size=4096 00:17:51.751 [2024-12-07 22:49:06.450411] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450418] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450422] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450431] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.751 [2024-12-07 22:49:06.450437] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.751 [2024-12-07 22:49:06.450441] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450445] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0dc0) on tqpair=0x2477ac0 00:17:51.751 [2024-12-07 22:49:06.450458] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:51.751 [2024-12-07 22:49:06.450468] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:51.751 [2024-12-07 22:49:06.450479] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:51.751 [2024-12-07 22:49:06.450486] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:51.751 [2024-12-07 22:49:06.450491] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:51.751 [2024-12-07 22:49:06.450496] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:51.751 [2024-12-07 22:49:06.450502] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:51.751 [2024-12-07 22:49:06.450507] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:51.751 [2024-12-07 22:49:06.450512] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:51.751 [2024-12-07 22:49:06.450528] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450532] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477ac0) 00:17:51.751 [2024-12-07 22:49:06.450540] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.751 [2024-12-07 22:49:06.450547] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450552] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450555] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2477ac0) 00:17:51.751 [2024-12-07 22:49:06.450562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:51.751 [2024-12-07 22:49:06.450586] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0dc0, cid 4, qid 0 00:17:51.751 [2024-12-07 22:49:06.450594] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0f40, cid 5, qid 0 00:17:51.751 [2024-12-07 22:49:06.450651] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.751 [2024-12-07 22:49:06.450659] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.751 [2024-12-07 22:49:06.450663] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450667] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0dc0) on tqpair=0x2477ac0 00:17:51.751 [2024-12-07 22:49:06.450674] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.751 [2024-12-07 22:49:06.450680] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.751 [2024-12-07 22:49:06.450684] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450688] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0f40) on tqpair=0x2477ac0 00:17:51.751 [2024-12-07 22:49:06.450698] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450703] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2477ac0) 00:17:51.751 [2024-12-07 22:49:06.450710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.751 [2024-12-07 22:49:06.450728] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0f40, cid 5, qid 0 00:17:51.751 [2024-12-07 22:49:06.450774] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.751 [2024-12-07 22:49:06.450782] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.751 [2024-12-07 22:49:06.450786] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450790] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0f40) on tqpair=0x2477ac0 00:17:51.751 [2024-12-07 22:49:06.450800] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450805] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2477ac0) 00:17:51.751 [2024-12-07 22:49:06.450812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.751 [2024-12-07 22:49:06.450829] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0f40, cid 5, qid 0 00:17:51.751 [2024-12-07 22:49:06.450934] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.751 
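The entries above show the tail of the controller-initialization state machine: the driver arms Asynchronous Event Requests, negotiates the keep-alive timeout, sets the number of queues, walks the identify sequence, and reads back features (arbitration, power management, temperature threshold) before declaring the controller ready. A minimal host-side sketch that drives this same sequence over NVMe/TCP follows; the target address, port, and subsystem NQN are taken from this log, while the program itself (its name, structure, and error handling) is illustrative only and is not part of this test run:

/* connect_sketch.c - hedged sketch of the host-side call that produces the
 * initialization trace above. spdk_nvme_connect() runs the whole state
 * machine (keep alive timeout, number of queues, identify, get/set features)
 * synchronously against the target. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_ctrlr_opts ctrlr_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "connect_sketch"; /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Target address and subsystem NQN as reported in this log. */
	if (spdk_nvme_transport_id_parse(&trid,
	        "trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
	        "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
	/* 10000 ms matches the keep-alive granularity the target reports;
	 * the driver then sends a keep alive every 5000000 us as traced above. */
	ctrlr_opts.keep_alive_timeout_ms = 10000;

	/* Blocks until the controller reaches the "ready" state. */
	ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
	if (ctrlr == NULL) {
		fprintf(stderr, "connect to %s failed\n", trid.subnqn);
		return 1;
	}
	printf("connected; first active nsid: %u\n",
	       spdk_nvme_ctrlr_get_first_active_ns(ctrlr));
	spdk_nvme_detach(ctrlr);
	return 0;
}

spdk_nvme_connect() is the synchronous entry point; an event-driven application would use spdk_nvme_connect_async() instead and poll the returned probe context, but the state transitions logged here are the same either way.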
[2024-12-07 22:49:06.450945] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.751 [2024-12-07 22:49:06.450949] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450953] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0f40) on tqpair=0x2477ac0 00:17:51.751 [2024-12-07 22:49:06.450965] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.450971] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2477ac0) 00:17:51.751 [2024-12-07 22:49:06.450978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.751 [2024-12-07 22:49:06.450998] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0f40, cid 5, qid 0 00:17:51.751 [2024-12-07 22:49:06.451050] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.751 [2024-12-07 22:49:06.451057] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.751 [2024-12-07 22:49:06.451061] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.451065] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0f40) on tqpair=0x2477ac0 00:17:51.751 [2024-12-07 22:49:06.451084] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.451090] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2477ac0) 00:17:51.751 [2024-12-07 22:49:06.451098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.751 [2024-12-07 22:49:06.451106] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.451110] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2477ac0) 00:17:51.751 [2024-12-07 22:49:06.451117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.751 [2024-12-07 22:49:06.451125] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.451129] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x2477ac0) 00:17:51.751 [2024-12-07 22:49:06.451136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.751 [2024-12-07 22:49:06.451144] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.451148] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2477ac0) 00:17:51.751 [2024-12-07 22:49:06.451165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.751 [2024-12-07 22:49:06.451205] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0f40, cid 5, qid 0 00:17:51.751 [2024-12-07 22:49:06.451213] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0dc0, cid 4, qid 0 00:17:51.751 [2024-12-07 22:49:06.451218] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b10c0, cid 6, qid 0 00:17:51.751 [2024-12-07 22:49:06.451223] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b1240, cid 7, qid 0 00:17:51.751 [2024-12-07 22:49:06.451363] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.751 [2024-12-07 22:49:06.451371] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.751 [2024-12-07 22:49:06.451375] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.451380] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477ac0): datao=0, datal=8192, cccid=5 00:17:51.751 [2024-12-07 22:49:06.451385] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24b0f40) on tqpair(0x2477ac0): expected_datao=0, payload_size=8192 00:17:51.751 [2024-12-07 22:49:06.451390] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.751 [2024-12-07 22:49:06.451407] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451412] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451419] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.752 [2024-12-07 22:49:06.451425] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.752 [2024-12-07 22:49:06.451429] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451434] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477ac0): datao=0, datal=512, cccid=4 00:17:51.752 [2024-12-07 22:49:06.451439] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24b0dc0) on tqpair(0x2477ac0): expected_datao=0, payload_size=512 00:17:51.752 [2024-12-07 22:49:06.451444] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451450] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451455] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451476] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.752 [2024-12-07 22:49:06.451482] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.752 [2024-12-07 22:49:06.451486] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451504] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2477ac0): datao=0, datal=512, cccid=6 00:17:51.752 [2024-12-07 22:49:06.451509] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24b10c0) on tqpair(0x2477ac0): expected_datao=0, payload_size=512 00:17:51.752 [2024-12-07 22:49:06.451513] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451519] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451523] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451529] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:51.752 [2024-12-07 22:49:06.451535] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:51.752 [2024-12-07 22:49:06.451538] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451542] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x2477ac0): datao=0, datal=4096, cccid=7 00:17:51.752 [2024-12-07 22:49:06.451547] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24b1240) on tqpair(0x2477ac0): expected_datao=0, payload_size=4096 00:17:51.752 [2024-12-07 22:49:06.451551] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451558] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451562] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451570] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.752 [2024-12-07 22:49:06.451576] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.752 [2024-12-07 22:49:06.451580] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451584] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0f40) on tqpair=0x2477ac0 00:17:51.752 [2024-12-07 22:49:06.451599] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.752 [2024-12-07 22:49:06.451607] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.752 [2024-12-07 22:49:06.451610] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451614] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0dc0) on tqpair=0x2477ac0 00:17:51.752 [2024-12-07 22:49:06.451626] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.752 [2024-12-07 22:49:06.451632] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.752 [2024-12-07 22:49:06.451636] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451640] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b10c0) on tqpair=0x2477ac0 00:17:51.752 [2024-12-07 22:49:06.451648] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.752 [2024-12-07 22:49:06.451654] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.752 [2024-12-07 22:49:06.451658] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.752 [2024-12-07 22:49:06.451662] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b1240) on tqpair=0x2477ac0 00:17:51.752
===================================================== 00:17:51.752
NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:51.752
===================================================== 00:17:51.752
Controller Capabilities/Features 00:17:51.752
================================ 00:17:51.752
Vendor ID: 8086 00:17:51.752
Subsystem Vendor ID: 8086 00:17:51.752
Serial Number: SPDK00000000000001 00:17:51.752
Model Number: SPDK bdev Controller 00:17:51.752
Firmware Version: 24.09.1 00:17:51.752
Recommended Arb Burst: 6 00:17:51.752
IEEE OUI Identifier: e4 d2 5c 00:17:51.752
Multi-path I/O 00:17:51.752
May have multiple subsystem ports: Yes 00:17:51.752
May have multiple controllers: Yes 00:17:51.752
Associated with SR-IOV VF: No 00:17:51.752
Max Data Transfer Size: 131072 00:17:51.752
Max Number of Namespaces: 32 00:17:51.752
Max Number of I/O Queues: 127 00:17:51.752
NVMe Specification Version (VS): 1.3 00:17:51.752
NVMe Specification Version (Identify): 1.3 00:17:51.752
Maximum Queue Entries: 128 00:17:51.752
Contiguous Queues Required: Yes 00:17:51.752
Arbitration Mechanisms Supported 00:17:51.752
Weighted Round Robin: Not Supported 00:17:51.752
Vendor Specific: Not Supported 00:17:51.752
Reset Timeout: 15000 ms 00:17:51.752
Doorbell Stride: 4 bytes 00:17:51.752
NVM Subsystem Reset: Not Supported 00:17:51.752
Command Sets Supported 00:17:51.752
NVM Command Set: Supported 00:17:51.752
Boot Partition: Not Supported 00:17:51.752
Memory Page Size Minimum: 4096 bytes 00:17:51.752
Memory Page Size Maximum: 4096 bytes 00:17:51.752
Persistent Memory Region: Not Supported 00:17:51.752
Optional Asynchronous Events Supported 00:17:51.752
Namespace Attribute Notices: Supported 00:17:51.752
Firmware Activation Notices: Not Supported 00:17:51.752
ANA Change Notices: Not Supported 00:17:51.752
PLE Aggregate Log Change Notices: Not Supported 00:17:51.752
LBA Status Info Alert Notices: Not Supported 00:17:51.752
EGE Aggregate Log Change Notices: Not Supported 00:17:51.752
Normal NVM Subsystem Shutdown event: Not Supported 00:17:51.752
Zone Descriptor Change Notices: Not Supported 00:17:51.752
Discovery Log Change Notices: Not Supported 00:17:51.752
Controller Attributes 00:17:51.752
128-bit Host Identifier: Supported 00:17:51.752
Non-Operational Permissive Mode: Not Supported 00:17:51.752
NVM Sets: Not Supported 00:17:51.752
Read Recovery Levels: Not Supported 00:17:51.752
Endurance Groups: Not Supported 00:17:51.752
Predictable Latency Mode: Not Supported 00:17:51.752
Traffic Based Keep Alive: Not Supported 00:17:51.752
Namespace Granularity: Not Supported 00:17:51.752
SQ Associations: Not Supported 00:17:51.752
UUID List: Not Supported 00:17:51.752
Multi-Domain Subsystem: Not Supported 00:17:51.752
Fixed Capacity Management: Not Supported 00:17:51.752
Variable Capacity Management: Not Supported 00:17:51.752
Delete Endurance Group: Not Supported 00:17:51.752
Delete NVM Set: Not Supported 00:17:51.752
Extended LBA Formats Supported: Not Supported 00:17:51.752
Flexible Data Placement Supported: Not Supported 00:17:51.752
00:17:51.752
Controller Memory Buffer Support 00:17:51.752
================================ 00:17:51.752
Supported: No 00:17:51.752
00:17:51.752
Persistent Memory Region Support 00:17:51.752
================================ 00:17:51.752
Supported: No 00:17:51.752
00:17:51.752
Admin Command Set Attributes 00:17:51.752
============================ 00:17:51.752
Security Send/Receive: Not Supported 00:17:51.752
Format NVM: Not Supported 00:17:51.752
Firmware Activate/Download: Not Supported 00:17:51.752
Namespace Management: Not Supported 00:17:51.752
Device Self-Test: Not Supported 00:17:51.752
Directives: Not Supported 00:17:51.752
NVMe-MI: Not Supported 00:17:51.752
Virtualization Management: Not Supported 00:17:51.752
Doorbell Buffer Config: Not Supported 00:17:51.752
Get LBA Status Capability: Not Supported 00:17:51.752
Command & Feature Lockdown Capability: Not Supported 00:17:51.752
Abort Command Limit: 4 00:17:51.752
Async Event Request Limit: 4 00:17:51.752
Number of Firmware Slots: N/A 00:17:51.752
Firmware Slot 1 Read-Only: N/A 00:17:51.752
Firmware Activation Without Reset: N/A 00:17:51.752
Multiple Update Detection Support: N/A 00:17:51.752
Firmware Update Granularity: No Information Provided 00:17:51.752
Per-Namespace SMART Log: No 00:17:51.752
Asymmetric Namespace Access Log Page: Not Supported 00:17:51.752
Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:51.752
Command Effects Log Page: Supported 00:17:51.752
Get Log Page Extended Data: Supported 00:17:51.752
Telemetry Log Pages: Not Supported 00:17:51.752
Persistent Event Log Pages: Not Supported 00:17:51.752
Supported Log Pages Log Page: May Support 00:17:51.752
Commands Supported & Effects Log Page: Not Supported 00:17:51.752
Feature Identifiers & Effects Log Page: May Support 00:17:51.752
NVMe-MI Commands & Effects Log Page: May Support 00:17:51.752
Data Area 4 for Telemetry Log: Not Supported 00:17:51.752
Error Log Page Entries Supported: 128 00:17:51.752
Keep Alive: Supported 00:17:51.752
Keep Alive Granularity: 10000 ms 00:17:51.752
00:17:51.752
NVM Command Set Attributes 00:17:51.752
========================== 00:17:51.752
Submission Queue Entry Size 00:17:51.752
Max: 64 00:17:51.752
Min: 64 00:17:51.752
Completion Queue Entry Size 00:17:51.752
Max: 16 00:17:51.752
Min: 16 00:17:51.752
Number of Namespaces: 32 00:17:51.752
Compare Command: Supported 00:17:51.752
Write Uncorrectable Command: Not Supported 00:17:51.752
Dataset Management Command: Supported 00:17:51.752
Write Zeroes Command: Supported 00:17:51.753
Set Features Save Field: Not Supported 00:17:51.753
Reservations: Supported 00:17:51.753
Timestamp: Not Supported 00:17:51.753
Copy: Supported 00:17:51.753
Volatile Write Cache: Present 00:17:51.753
Atomic Write Unit (Normal): 1 00:17:51.753
Atomic Write Unit (PFail): 1 00:17:51.753
Atomic Compare & Write Unit: 1 00:17:51.753
Fused Compare & Write: Supported 00:17:51.753
Scatter-Gather List 00:17:51.753
SGL Command Set: Supported 00:17:51.753
SGL Keyed: Supported 00:17:51.753
SGL Bit Bucket Descriptor: Not Supported 00:17:51.753
SGL Metadata Pointer: Not Supported 00:17:51.753
Oversized SGL: Not Supported 00:17:51.753
SGL Metadata Address: Not Supported 00:17:51.753
SGL Offset: Supported 00:17:51.753
Transport SGL Data Block: Not Supported 00:17:51.753
Replay Protected Memory Block: Not Supported 00:17:51.753
00:17:51.753
Firmware Slot Information 00:17:51.753
========================= 00:17:51.753
Active slot: 1 00:17:51.753
Slot 1 Firmware Revision: 24.09.1 00:17:51.753
00:17:51.753
00:17:51.753
Commands Supported and Effects 00:17:51.753
============================== 00:17:51.753
Admin Commands 00:17:51.753
-------------- 00:17:51.753
Get Log Page (02h): Supported 00:17:51.753
Identify (06h): Supported 00:17:51.753
Abort (08h): Supported 00:17:51.753
Set Features (09h): Supported 00:17:51.753
Get Features (0Ah): Supported 00:17:51.753
Asynchronous Event Request (0Ch): Supported 00:17:51.753
Keep Alive (18h): Supported 00:17:51.753
I/O Commands 00:17:51.753
------------ 00:17:51.753
Flush (00h): Supported LBA-Change 00:17:51.753
Write (01h): Supported LBA-Change 00:17:51.753
Read (02h): Supported 00:17:51.753
Compare (05h): Supported 00:17:51.753
Write Zeroes (08h): Supported LBA-Change 00:17:51.753
Dataset Management (09h): Supported LBA-Change 00:17:51.753
Copy (19h): Supported LBA-Change 00:17:51.753
00:17:51.753
Error Log 00:17:51.753
========= 00:17:51.753
00:17:51.753
Arbitration 00:17:51.753
=========== 00:17:51.753
Arbitration Burst: 1 00:17:51.753
00:17:51.753
Power Management 00:17:51.753
================ 00:17:51.753
Number of Power States: 1 00:17:51.753
Current Power State: Power State #0 00:17:51.753
Power State #0: 00:17:51.753
Max Power: 0.00 W 00:17:51.753
Non-Operational State: Operational 00:17:51.753
Entry Latency: Not Reported 00:17:51.753
Exit Latency: Not Reported 00:17:51.753
Relative Read Throughput: 0 00:17:51.753
Relative Read Latency: 0 00:17:51.753
Relative Write Throughput: 0 00:17:51.753
Relative Write Latency: 0 00:17:51.753
Idle Power: Not Reported 00:17:51.753
Active Power: Not Reported 00:17:51.753
Non-Operational Permissive Mode: Not Supported 00:17:51.753
00:17:51.753
Health Information 00:17:51.753
================== 00:17:51.753
Critical Warnings: 00:17:51.753
Available Spare Space: OK 00:17:51.753
Temperature: OK 00:17:51.753
Device Reliability: OK 00:17:51.753
Read Only: No 00:17:51.753
Volatile Memory Backup: OK 00:17:51.753
Current Temperature: 0 Kelvin (-273 Celsius) 00:17:51.753
Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:51.753
Available Spare: 0% 00:17:51.753
Available Spare Threshold: 0% 00:17:51.753
Life Percentage U[2024-12-07 22:49:06.451761] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.753 [2024-12-07 22:49:06.451768] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x2477ac0) 00:17:51.753 [2024-12-07 22:49:06.451777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.753 [2024-12-07 22:49:06.451800] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b1240, cid 7, qid 0 00:17:51.753 [2024-12-07 22:49:06.451847] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.753 [2024-12-07 22:49:06.451855] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.753 [2024-12-07 22:49:06.451859] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.753 [2024-12-07 22:49:06.451863] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b1240) on tqpair=0x2477ac0 00:17:51.753 [2024-12-07 22:49:06.451917] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:51.753 [2024-12-07 22:49:06.455939] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b07c0) on tqpair=0x2477ac0 00:17:51.753 [2024-12-07 22:49:06.455976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.753 [2024-12-07 22:49:06.456000] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0940) on tqpair=0x2477ac0 00:17:51.753 [2024-12-07 22:49:06.456005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.753 [2024-12-07 22:49:06.456011] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0ac0) on tqpair=0x2477ac0 00:17:51.753 [2024-12-07 22:49:06.456016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.753 [2024-12-07 22:49:06.456021] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.753 [2024-12-07 22:49:06.456026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.753 [2024-12-07 22:49:06.456038] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.753 [2024-12-07 22:49:06.456043] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.753 [2024-12-07 22:49:06.456047] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.753 [2024-12-07 22:49:06.456056] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.753 [2024-12-07 22:49:06.456085] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.753 [2024-12-07
22:49:06.456134] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.753 [2024-12-07 22:49:06.456142] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.753 [2024-12-07 22:49:06.456147] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.753 [2024-12-07 22:49:06.456151] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.753 [2024-12-07 22:49:06.456159] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.753 [2024-12-07 22:49:06.456164] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.753 [2024-12-07 22:49:06.456168] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.753 [2024-12-07 22:49:06.456176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.753 [2024-12-07 22:49:06.456199] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.753 [2024-12-07 22:49:06.456272] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.753 [2024-12-07 22:49:06.456280] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.753 [2024-12-07 22:49:06.456284] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.753 [2024-12-07 22:49:06.456288] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.753 [2024-12-07 22:49:06.456293] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:51.753 [2024-12-07 22:49:06.456298] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:51.753 [2024-12-07 22:49:06.456309] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.753 [2024-12-07 22:49:06.456315] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.753 [2024-12-07 22:49:06.456318] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.753 [2024-12-07 22:49:06.456341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.753 [2024-12-07 22:49:06.456358] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.753 [2024-12-07 22:49:06.456404] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.753 [2024-12-07 22:49:06.456411] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.753 [2024-12-07 22:49:06.456415] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.753 [2024-12-07 22:49:06.456419] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.753 [2024-12-07 22:49:06.456431] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.753 [2024-12-07 22:49:06.456436] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.753 [2024-12-07 22:49:06.456440] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.753 [2024-12-07 22:49:06.456447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.753 [2024-12-07 22:49:06.456464] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.753 [2024-12-07 22:49:06.456508] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.753 [2024-12-07 22:49:06.456515] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.754 [2024-12-07 22:49:06.456519] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.456523] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.754 [2024-12-07 22:49:06.456534] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.456539] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.456542] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.754 [2024-12-07 22:49:06.456550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.754 [2024-12-07 22:49:06.456567] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.754 [2024-12-07 22:49:06.456617] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.754 [2024-12-07 22:49:06.456624] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.754 [2024-12-07 22:49:06.456628] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.456632] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.754 [2024-12-07 22:49:06.456643] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.456648] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.456652] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.754 [2024-12-07 22:49:06.456659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.754 [2024-12-07 22:49:06.456676] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.754 [2024-12-07 22:49:06.456727] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.754 [2024-12-07 22:49:06.456735] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.754 [2024-12-07 22:49:06.456738] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.456743] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.754 [2024-12-07 22:49:06.456753] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.456758] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.456762] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.754 [2024-12-07 22:49:06.456769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.754 [2024-12-07 22:49:06.456786] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.754 [2024-12-07 22:49:06.456830] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.754 [2024-12-07 
22:49:06.456837] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.754 [2024-12-07 22:49:06.456841] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.456845] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.754 [2024-12-07 22:49:06.456856] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.456861] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.456865] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.754 [2024-12-07 22:49:06.456872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.754 [2024-12-07 22:49:06.456889] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.754 [2024-12-07 22:49:06.456945] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.754 [2024-12-07 22:49:06.456953] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.754 [2024-12-07 22:49:06.456957] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.456961] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.754 [2024-12-07 22:49:06.456972] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.456978] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.456981] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.754 [2024-12-07 22:49:06.456989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.754 [2024-12-07 22:49:06.457008] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.754 [2024-12-07 22:49:06.457055] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.754 [2024-12-07 22:49:06.457062] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.754 [2024-12-07 22:49:06.457066] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457070] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.754 [2024-12-07 22:49:06.457081] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457086] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457090] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.754 [2024-12-07 22:49:06.457097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.754 [2024-12-07 22:49:06.457114] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.754 [2024-12-07 22:49:06.457157] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.754 [2024-12-07 22:49:06.457165] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.754 [2024-12-07 22:49:06.457169] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.754 
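From "Prepare to destruct SSD" onward the trace settles into a repeating pattern: each FABRIC PROPERTY GET on cid:3 is the host re-reading the controller's CSTS property over the admin queue while it waits for the shutdown it requested (RTD3E = 0 us, shutdown timeout = 10000 ms) to complete. A hedged sketch of the application-side loop that produces this traffic, assuming ctrlr came from spdk_nvme_connect() as in the earlier sketch:

/* Sketch only: the async detach path. On NVMe-oF, reading CSTS requires a
 * fabrics Property Get command, which is exactly the repeated
 * "FABRIC PROPERTY GET qid:0 cid:3" entries in this trace. */
#include <errno.h>
#include "spdk/nvme.h"

static void
detach_and_wait(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *detach_ctx = NULL;

	if (spdk_nvme_detach_async(ctrlr, &detach_ctx) != 0 || detach_ctx == NULL) {
		return; /* nothing to poll */
	}
	/* Returns -EAGAIN until the controller reports shutdown complete,
	 * bounded by the 10000 ms shutdown timeout logged above. */
	while (spdk_nvme_detach_poll_async(detach_ctx) == -EAGAIN) {
		; /* a real application would yield or service other work here */
	}
}

The blocking spdk_nvme_detach() used in the first sketch wraps this same poll loop; the async variant just makes each Property Get round trip visible to the caller.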
[2024-12-07 22:49:06.457173] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.754 [2024-12-07 22:49:06.457184] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457189] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457192] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.754 [2024-12-07 22:49:06.457200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.754 [2024-12-07 22:49:06.457217] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.754 [2024-12-07 22:49:06.457261] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.754 [2024-12-07 22:49:06.457268] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.754 [2024-12-07 22:49:06.457272] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457276] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.754 [2024-12-07 22:49:06.457287] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457292] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457295] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.754 [2024-12-07 22:49:06.457303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.754 [2024-12-07 22:49:06.457320] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.754 [2024-12-07 22:49:06.457361] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.754 [2024-12-07 22:49:06.457368] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.754 [2024-12-07 22:49:06.457372] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457376] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.754 [2024-12-07 22:49:06.457387] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457392] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457396] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.754 [2024-12-07 22:49:06.457403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.754 [2024-12-07 22:49:06.457420] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.754 [2024-12-07 22:49:06.457465] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.754 [2024-12-07 22:49:06.457472] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.754 [2024-12-07 22:49:06.457476] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457480] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.754 [2024-12-07 22:49:06.457491] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457496] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457500] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.754 [2024-12-07 22:49:06.457507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.754 [2024-12-07 22:49:06.457524] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.754 [2024-12-07 22:49:06.457568] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.754 [2024-12-07 22:49:06.457575] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.754 [2024-12-07 22:49:06.457579] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457583] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.754 [2024-12-07 22:49:06.457594] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457599] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457602] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.754 [2024-12-07 22:49:06.457610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.754 [2024-12-07 22:49:06.457627] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.754 [2024-12-07 22:49:06.457674] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.754 [2024-12-07 22:49:06.457681] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.754 [2024-12-07 22:49:06.457685] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457689] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.754 [2024-12-07 22:49:06.457700] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457705] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.754 [2024-12-07 22:49:06.457708] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.754 [2024-12-07 22:49:06.457716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.754 [2024-12-07 22:49:06.457733] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.754 [2024-12-07 22:49:06.457773] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.755 [2024-12-07 22:49:06.457780] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.755 [2024-12-07 22:49:06.457784] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.457789] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.755 [2024-12-07 22:49:06.457799] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.457804] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.457808] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.755 [2024-12-07 22:49:06.457815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.755 [2024-12-07 22:49:06.457832] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.755 [2024-12-07 22:49:06.457906] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.755 [2024-12-07 22:49:06.457915] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.755 [2024-12-07 22:49:06.457919] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.457924] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.755 [2024-12-07 22:49:06.457935] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.457940] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.457944] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.755 [2024-12-07 22:49:06.457952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.755 [2024-12-07 22:49:06.457971] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.755 [2024-12-07 22:49:06.458020] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.755 [2024-12-07 22:49:06.458027] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.755 [2024-12-07 22:49:06.458031] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458036] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.755 [2024-12-07 22:49:06.458047] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458052] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458056] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.755 [2024-12-07 22:49:06.458064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.755 [2024-12-07 22:49:06.458081] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.755 [2024-12-07 22:49:06.458123] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.755 [2024-12-07 22:49:06.458131] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.755 [2024-12-07 22:49:06.458135] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458139] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.755 [2024-12-07 22:49:06.458150] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458155] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458159] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.755 [2024-12-07 22:49:06.458167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.755 [2024-12-07 22:49:06.458184] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.755 [2024-12-07 22:49:06.458235] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.755 [2024-12-07 22:49:06.458243] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.755 [2024-12-07 22:49:06.458246] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458251] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.755 [2024-12-07 22:49:06.458262] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458267] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458286] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.755 [2024-12-07 22:49:06.458293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.755 [2024-12-07 22:49:06.458310] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.755 [2024-12-07 22:49:06.458352] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.755 [2024-12-07 22:49:06.458359] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.755 [2024-12-07 22:49:06.458363] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458367] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.755 [2024-12-07 22:49:06.458378] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458382] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458386] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.755 [2024-12-07 22:49:06.458394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.755 [2024-12-07 22:49:06.458411] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.755 [2024-12-07 22:49:06.458457] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.755 [2024-12-07 22:49:06.458465] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.755 [2024-12-07 22:49:06.458468] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458472] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.755 [2024-12-07 22:49:06.458483] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458488] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458492] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.755 [2024-12-07 22:49:06.458499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.755 [2024-12-07 22:49:06.458516] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.755 [2024-12-07 
22:49:06.458557] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.755 [2024-12-07 22:49:06.458564] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.755 [2024-12-07 22:49:06.458568] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458572] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.755 [2024-12-07 22:49:06.458582] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458587] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458591] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.755 [2024-12-07 22:49:06.458598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.755 [2024-12-07 22:49:06.458615] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.755 [2024-12-07 22:49:06.458662] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.755 [2024-12-07 22:49:06.458669] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.755 [2024-12-07 22:49:06.458673] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458677] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.755 [2024-12-07 22:49:06.458687] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458692] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458696] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.755 [2024-12-07 22:49:06.458704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.755 [2024-12-07 22:49:06.458720] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.755 [2024-12-07 22:49:06.458761] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.755 [2024-12-07 22:49:06.458768] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.755 [2024-12-07 22:49:06.458772] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458776] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.755 [2024-12-07 22:49:06.458787] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458792] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458796] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.755 [2024-12-07 22:49:06.458803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.755 [2024-12-07 22:49:06.458820] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.755 [2024-12-07 22:49:06.458861] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.755 [2024-12-07 22:49:06.458868] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.755 
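For reference, the controller report printed earlier in this trace (Vendor ID 8086, Model Number "SPDK bdev Controller", firmware 24.09.1, 32 namespaces) is formatted from identify data the driver cached while the initialization state machine ran. A sketch of reading the same fields back through the public API; print_ctrlr_summary is a hypothetical helper, and the field names follow struct spdk_nvme_ctrlr_data:

#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical helper: prints the identify fields the report above was
 * built from. ctrlr is assumed to come from spdk_nvme_connect(). */
static void
print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	printf("Vendor ID: %04x\n", cdata->vid);       /* 8086 in this run */
	printf("Model Number: %.40s\n", cdata->mn);    /* SPDK bdev Controller */
	printf("Firmware Version: %.8s\n", cdata->fr); /* 24.09.1 */
	printf("Max Number of Namespaces: %" PRIu32 "\n", cdata->nn);

	/* Walk the active namespaces discovered during the
	 * identify-active-ns step ("Namespace 1 was added" above). */
	for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
		printf("nsid %" PRIu32 ": %" PRIu64 " bytes\n",
		       nsid, spdk_nvme_ns_get_size(ns));
	}
}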
[2024-12-07 22:49:06.458872] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458876] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.755 [2024-12-07 22:49:06.458896] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458903] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.458923] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.755 [2024-12-07 22:49:06.458931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.755 [2024-12-07 22:49:06.458951] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.755 [2024-12-07 22:49:06.459001] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.755 [2024-12-07 22:49:06.459008] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.755 [2024-12-07 22:49:06.459012] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.459017] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.755 [2024-12-07 22:49:06.459028] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.459033] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.755 [2024-12-07 22:49:06.459037] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.755 [2024-12-07 22:49:06.459044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.756 [2024-12-07 22:49:06.459062] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.756 [2024-12-07 22:49:06.459110] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.756 [2024-12-07 22:49:06.459118] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.756 [2024-12-07 22:49:06.459122] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459126] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.756 [2024-12-07 22:49:06.459137] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459142] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459146] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.756 [2024-12-07 22:49:06.459154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.756 [2024-12-07 22:49:06.459200] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.756 [2024-12-07 22:49:06.459248] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.756 [2024-12-07 22:49:06.459256] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.756 [2024-12-07 22:49:06.459260] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459265] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.756 [2024-12-07 22:49:06.459276] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459281] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459286] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.756 [2024-12-07 22:49:06.459294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.756 [2024-12-07 22:49:06.459312] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.756 [2024-12-07 22:49:06.459359] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.756 [2024-12-07 22:49:06.459366] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.756 [2024-12-07 22:49:06.459370] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459375] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.756 [2024-12-07 22:49:06.459386] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459391] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459395] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.756 [2024-12-07 22:49:06.459403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.756 [2024-12-07 22:49:06.459421] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.756 [2024-12-07 22:49:06.459499] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.756 [2024-12-07 22:49:06.459506] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.756 [2024-12-07 22:49:06.459510] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459514] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.756 [2024-12-07 22:49:06.459524] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459529] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459533] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.756 [2024-12-07 22:49:06.459540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.756 [2024-12-07 22:49:06.459557] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.756 [2024-12-07 22:49:06.459617] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.756 [2024-12-07 22:49:06.459626] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.756 [2024-12-07 22:49:06.459630] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459634] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.756 [2024-12-07 22:49:06.459645] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459650] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459654] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.756 [2024-12-07 22:49:06.459661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.756 [2024-12-07 22:49:06.459678] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.756 [2024-12-07 22:49:06.459725] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.756 [2024-12-07 22:49:06.459741] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.756 [2024-12-07 22:49:06.459746] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459750] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.756 [2024-12-07 22:49:06.459761] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459767] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459771] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.756 [2024-12-07 22:49:06.459778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.756 [2024-12-07 22:49:06.459796] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.756 [2024-12-07 22:49:06.459838] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.756 [2024-12-07 22:49:06.459850] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.756 [2024-12-07 22:49:06.459855] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.459859] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.756 [2024-12-07 22:49:06.463943] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.463961] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.463966] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2477ac0) 00:17:51.756 [2024-12-07 22:49:06.463975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:51.756 [2024-12-07 22:49:06.464001] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24b0c40, cid 3, qid 0 00:17:51.756 [2024-12-07 22:49:06.464074] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:51.756 [2024-12-07 22:49:06.464081] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:51.756 [2024-12-07 22:49:06.464086] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:51.756 [2024-12-07 22:49:06.464090] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24b0c40) on tqpair=0x2477ac0 00:17:51.756 [2024-12-07 22:49:06.464099] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:17:51.756 sed: 0% 00:17:51.756 Data Units Read: 0 00:17:51.756 Data Units Written: 0 00:17:51.756 Host Read Commands: 0 00:17:51.756 Host Write Commands: 0 00:17:51.756 Controller Busy Time: 0 
minutes 00:17:51.756 Power Cycles: 0 00:17:51.756 Power On Hours: 0 hours 00:17:51.756 Unsafe Shutdowns: 0 00:17:51.756 Unrecoverable Media Errors: 0 00:17:51.756 Lifetime Error Log Entries: 0 00:17:51.756 Warning Temperature Time: 0 minutes 00:17:51.756 Critical Temperature Time: 0 minutes 00:17:51.756 00:17:51.756 Number of Queues 00:17:51.756 ================ 00:17:51.756 Number of I/O Submission Queues: 127 00:17:51.756 Number of I/O Completion Queues: 127 00:17:51.756 00:17:51.756 Active Namespaces 00:17:51.756 ================= 00:17:51.756 Namespace ID:1 00:17:51.756 Error Recovery Timeout: Unlimited 00:17:51.756 Command Set Identifier: NVM (00h) 00:17:51.756 Deallocate: Supported 00:17:51.756 Deallocated/Unwritten Error: Not Supported 00:17:51.756 Deallocated Read Value: Unknown 00:17:51.756 Deallocate in Write Zeroes: Not Supported 00:17:51.756 Deallocated Guard Field: 0xFFFF 00:17:51.756 Flush: Supported 00:17:51.756 Reservation: Supported 00:17:51.756 Namespace Sharing Capabilities: Multiple Controllers 00:17:51.756 Size (in LBAs): 131072 (0GiB) 00:17:51.756 Capacity (in LBAs): 131072 (0GiB) 00:17:51.756 Utilization (in LBAs): 131072 (0GiB) 00:17:51.756 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:51.756 EUI64: ABCDEF0123456789 00:17:51.756 UUID: f870e712-d798-41d3-b438-11d2ea77f8ec 00:17:51.756 Thin Provisioning: Not Supported 00:17:51.756 Per-NS Atomic Units: Yes 00:17:51.756 Atomic Boundary Size (Normal): 0 00:17:51.756 Atomic Boundary Size (PFail): 0 00:17:51.756 Atomic Boundary Offset: 0 00:17:51.756 Maximum Single Source Range Length: 65535 00:17:51.756 Maximum Copy Length: 65535 00:17:51.756 Maximum Source Range Count: 1 00:17:51.756 NGUID/EUI64 Never Reused: No 00:17:51.756 Namespace Write Protected: No 00:17:51.756 Number of LBA Formats: 1 00:17:51.756 Current LBA Format: LBA Format #00 00:17:51.756 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:51.756 00:17:51.756 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:52.017 rmmod nvme_tcp 00:17:52.017 rmmod nvme_fabrics 00:17:52.017 rmmod nvme_keyring 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:52.017 22:49:06 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 87852 ']' 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 87852 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 87852 ']' 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 87852 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87852 00:17:52.017 killing process with pid 87852 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87852' 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 87852 00:17:52.017 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 87852 00:17:52.275 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:52.276 22:49:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:52.276 22:49:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:52.276 22:49:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.276 22:49:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.276 22:49:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:17:52.534 ************************************ 00:17:52.534 END TEST nvmf_identify 00:17:52.534 ************************************ 00:17:52.534 00:17:52.534 real 0m2.072s 00:17:52.534 user 0m4.210s 00:17:52.534 sys 0m0.681s 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:52.534 ************************************ 00:17:52.534 START TEST nvmf_perf 00:17:52.534 ************************************ 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:52.534 * Looking for test storage... 
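The iptr cleanup above (nvmf/common.sh@297) restores the firewall by dropping only the rules this test installed, keying on the SPDK_NVMF comment attached when each rule was added. A minimal sketch of the pipeline it composes, assuming a stock iptables install:

# Re-load the ruleset minus every rule tagged with an SPDK_NVMF comment.
iptables-save | grep -v SPDK_NVMF | iptables-restore

Tagging rules with -m comment --comment 'SPDK_NVMF:...' at setup time (visible in the perf test below) is what makes this position-independent cleanup possible.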
00:17:52.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.534 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:52.534 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.534 --rc genhtml_branch_coverage=1 00:17:52.535 --rc genhtml_function_coverage=1 00:17:52.535 --rc genhtml_legend=1 00:17:52.535 --rc geninfo_all_blocks=1 00:17:52.535 --rc geninfo_unexecuted_blocks=1 00:17:52.535 00:17:52.535 ' 00:17:52.535 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:52.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.535 --rc genhtml_branch_coverage=1 00:17:52.535 --rc genhtml_function_coverage=1 00:17:52.535 --rc genhtml_legend=1 00:17:52.535 --rc geninfo_all_blocks=1 00:17:52.535 --rc geninfo_unexecuted_blocks=1 00:17:52.535 00:17:52.535 ' 00:17:52.535 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:52.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.535 --rc genhtml_branch_coverage=1 00:17:52.535 --rc genhtml_function_coverage=1 00:17:52.535 --rc genhtml_legend=1 00:17:52.535 --rc geninfo_all_blocks=1 00:17:52.535 --rc geninfo_unexecuted_blocks=1 00:17:52.535 00:17:52.535 ' 00:17:52.535 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:52.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.535 --rc genhtml_branch_coverage=1 00:17:52.535 --rc genhtml_function_coverage=1 00:17:52.535 --rc genhtml_legend=1 00:17:52.535 --rc geninfo_all_blocks=1 00:17:52.535 --rc geninfo_unexecuted_blocks=1 00:17:52.535 00:17:52.535 ' 00:17:52.535 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.794 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:52.794 Cannot find device "nvmf_init_br" 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:52.794 Cannot find device "nvmf_init_br2" 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:52.794 Cannot find device "nvmf_tgt_br" 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:52.794 Cannot find device "nvmf_tgt_br2" 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:52.794 Cannot find device "nvmf_init_br" 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:52.794 Cannot find device "nvmf_init_br2" 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:52.794 Cannot find device "nvmf_tgt_br" 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:52.794 Cannot find device "nvmf_tgt_br2" 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:52.794 Cannot find device "nvmf_br" 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:52.794 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:52.795 Cannot find device "nvmf_init_if" 00:17:52.795 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:52.795 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:52.795 Cannot find device "nvmf_init_if2" 00:17:52.795 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:52.795 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:52.795 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:52.795 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:52.795 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:52.795 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:52.795 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:52.795 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:52.795 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:52.795 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:52.795 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:52.795 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:52.795 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:52.795 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:53.053 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:53.054 22:49:07 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:53.054 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:53.054 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:17:53.054 00:17:53.054 --- 10.0.0.3 ping statistics --- 00:17:53.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.054 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:53.054 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:53.054 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.126 ms 00:17:53.054 00:17:53.054 --- 10.0.0.4 ping statistics --- 00:17:53.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.054 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:53.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:53.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:17:53.054 00:17:53.054 --- 10.0.0.1 ping statistics --- 00:17:53.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.054 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:53.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:53.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:17:53.054 00:17:53.054 --- 10.0.0.2 ping statistics --- 00:17:53.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:53.054 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # return 0 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=88106 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 88106 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 88106 ']' 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:53.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
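The nvmfappstart step above reduces to launching the target inside the test namespace and blocking until its RPC socket answers. A minimal bash sketch, assuming the repo path used throughout this run and the default /var/tmp/spdk.sock socket; the polling loop is a simplified stand-in for the test's waitforlisten helper:

# Start nvmf_tgt pinned to 4 cores (-m 0xF) with all tracepoint groups enabled (-e 0xFFFF).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll for the UNIX-domain RPC socket before issuing any rpc.py calls.
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done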
00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:53.054 22:49:07 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:53.054 [2024-12-07 22:49:07.805206] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:17:53.054 [2024-12-07 22:49:07.805333] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.313 [2024-12-07 22:49:07.942703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:53.313 [2024-12-07 22:49:07.984543] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.313 [2024-12-07 22:49:07.984625] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.313 [2024-12-07 22:49:07.984659] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.313 [2024-12-07 22:49:07.984671] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.313 [2024-12-07 22:49:07.984682] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.313 [2024-12-07 22:49:07.984859] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.313 [2024-12-07 22:49:07.985046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.313 [2024-12-07 22:49:07.985248] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:53.313 [2024-12-07 22:49:07.985266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.313 [2024-12-07 22:49:08.016319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:53.313 22:49:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:53.313 22:49:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:17:53.313 22:49:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:53.313 22:49:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:53.313 22:49:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:53.571 22:49:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.571 22:49:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:53.571 22:49:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:53.830 22:49:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:53.830 22:49:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:54.397 22:49:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:54.397 22:49:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:54.657 22:49:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:54.657 22:49:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:00:10.0 ']' 00:17:54.657 22:49:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:54.657 22:49:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:54.657 22:49:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:54.916 [2024-12-07 22:49:09.424512] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.916 22:49:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:55.174 22:49:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:55.174 22:49:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:55.433 22:49:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:55.433 22:49:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:55.693 22:49:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:55.952 [2024-12-07 22:49:10.477739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:55.952 22:49:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:56.211 22:49:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:17:56.211 22:49:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:56.211 22:49:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:56.211 22:49:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:17:57.149 Initializing NVMe Controllers 00:17:57.149 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:17:57.149 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:17:57.149 Initialization complete. Launching workers. 00:17:57.149 ======================================================== 00:17:57.149 Latency(us) 00:17:57.149 Device Information : IOPS MiB/s Average min max 00:17:57.149 PCIE (0000:00:10.0) NSID 1 from core 0: 22620.20 88.36 1414.94 258.79 8251.46 00:17:57.149 ======================================================== 00:17:57.149 Total : 22620.20 88.36 1414.94 258.79 8251.46 00:17:57.149 00:17:57.149 22:49:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:58.534 Initializing NVMe Controllers 00:17:58.534 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:58.534 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:58.534 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:58.534 Initialization complete. Launching workers. 
00:17:58.534 ======================================================== 00:17:58.534 Latency(us) 00:17:58.534 Device Information : IOPS MiB/s Average min max 00:17:58.534 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3676.96 14.36 271.62 94.14 7181.44 00:17:58.534 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8038.73 4990.89 12016.47 00:17:58.534 ======================================================== 00:17:58.534 Total : 3801.96 14.85 526.98 94.14 12016.47 00:17:58.534 00:17:58.534 22:49:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:17:59.955 Initializing NVMe Controllers 00:17:59.955 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:59.955 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:59.955 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:59.955 Initialization complete. Launching workers. 00:17:59.955 ======================================================== 00:17:59.955 Latency(us) 00:17:59.955 Device Information : IOPS MiB/s Average min max 00:17:59.955 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9111.79 35.59 3511.86 519.64 9717.37 00:17:59.956 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3869.06 15.11 8303.25 4460.87 16778.04 00:17:59.956 ======================================================== 00:17:59.956 Total : 12980.85 50.71 4939.98 519.64 16778.04 00:17:59.956 00:17:59.956 22:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:17:59.956 22:49:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:02.492 Initializing NVMe Controllers 00:18:02.492 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.492 Controller IO queue size 128, less than required. 00:18:02.492 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.492 Controller IO queue size 128, less than required. 00:18:02.492 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.492 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:02.492 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:02.492 Initialization complete. Launching workers. 
00:18:02.492 ======================================================== 00:18:02.492 Latency(us) 00:18:02.492 Device Information : IOPS MiB/s Average min max 00:18:02.492 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1958.30 489.58 66331.24 34994.73 97078.59 00:18:02.492 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 670.56 167.64 198513.63 56011.12 317346.94 00:18:02.492 ======================================================== 00:18:02.492 Total : 2628.87 657.22 100047.91 34994.73 317346.94 00:18:02.492 00:18:02.492 22:49:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:18:02.492 Initializing NVMe Controllers 00:18:02.492 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:02.492 Controller IO queue size 128, less than required. 00:18:02.492 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.492 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:02.492 Controller IO queue size 128, less than required. 00:18:02.492 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:02.492 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:02.492 WARNING: Some requested NVMe devices were skipped 00:18:02.492 No valid NVMe controllers or AIO or URING devices found 00:18:02.492 22:49:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:18:05.030 Initializing NVMe Controllers 00:18:05.030 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:05.030 Controller IO queue size 128, less than required. 00:18:05.030 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:05.030 Controller IO queue size 128, less than required. 00:18:05.030 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:05.030 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:05.030 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:05.030 Initialization complete. Launching workers. 
00:18:05.030
00:18:05.030 ====================
00:18:05.030 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:18:05.030 TCP transport:
00:18:05.030 polls: 10243
00:18:05.030 idle_polls: 5365
00:18:05.030 sock_completions: 4878
00:18:05.030 nvme_completions: 7429
00:18:05.030 submitted_requests: 11100
00:18:05.030 queued_requests: 1
00:18:05.030
00:18:05.030 ====================
00:18:05.030 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:18:05.030 TCP transport:
00:18:05.030 polls: 10521
00:18:05.030 idle_polls: 6236
00:18:05.030 sock_completions: 4285
00:18:05.030 nvme_completions: 7013
00:18:05.030 submitted_requests: 10568
00:18:05.030 queued_requests: 1
00:18:05.030 ========================================================
00:18:05.030 Latency(us)
00:18:05.030 Device Information : IOPS MiB/s Average min max
00:18:05.030 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1850.79 462.70 70133.61 40219.79 107536.80
00:18:05.030 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1747.14 436.78 74392.82 29608.55 128985.56
00:18:05.030 ========================================================
00:18:05.030 Total : 3597.92 899.48 72201.86 29608.55 128985.56
00:18:05.030
00:18:05.030 22:49:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:18:05.030 22:49:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:05.289 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:18:05.289 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']'
00:18:05.289 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:18:05.857 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=8957fe95-54d5-49f3-93b6-4dcf147edcfc
00:18:05.857 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 8957fe95-54d5-49f3-93b6-4dcf147edcfc
00:18:05.857 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=8957fe95-54d5-49f3-93b6-4dcf147edcfc
00:18:05.857 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info
00:18:05.857 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc
00:18:05.857 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs
00:18:05.857 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:18:05.857 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[
00:18:05.857 {
00:18:05.857 "uuid": "8957fe95-54d5-49f3-93b6-4dcf147edcfc",
00:18:05.857 "name": "lvs_0",
00:18:05.857 "base_bdev": "Nvme0n1",
00:18:05.857 "total_data_clusters": 1278,
00:18:05.857 "free_clusters": 1278,
00:18:05.857 "block_size": 4096,
00:18:05.857 "cluster_size": 4194304
00:18:05.857 }
00:18:05.857 ]'
00:18:06.116 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="8957fe95-54d5-49f3-93b6-4dcf147edcfc") .free_clusters'
00:18:06.117 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278
00:18:06.117 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="8957fe95-54d5-49f3-93b6-4dcf147edcfc") .cluster_size'
00:18:06.117 5112
00:18:06.117 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304
00:18:06.117 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112
00:18:06.117 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112
00:18:06.117 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']'
00:18:06.117 22:49:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8957fe95-54d5-49f3-93b6-4dcf147edcfc lbd_0 5112
00:18:06.376 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=e538ff3d-fa05-4018-be4c-71e483c46641
00:18:06.376 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore e538ff3d-fa05-4018-be4c-71e483c46641 lvs_n_0
00:18:06.634 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=0c3986a4-3857-4d7f-b454-8c855ebd3a20
00:18:06.634 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 0c3986a4-3857-4d7f-b454-8c855ebd3a20
00:18:06.634 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=0c3986a4-3857-4d7f-b454-8c855ebd3a20
00:18:06.634 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info
00:18:06.634 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc
00:18:06.634 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs
00:18:06.634 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:18:06.893 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[
00:18:06.893 {
00:18:06.893 "uuid": "8957fe95-54d5-49f3-93b6-4dcf147edcfc",
00:18:06.893 "name": "lvs_0",
00:18:06.893 "base_bdev": "Nvme0n1",
00:18:06.893 "total_data_clusters": 1278,
00:18:06.893 "free_clusters": 0,
00:18:06.893 "block_size": 4096,
00:18:06.893 "cluster_size": 4194304
00:18:06.893 },
00:18:06.893 {
00:18:06.893 "uuid": "0c3986a4-3857-4d7f-b454-8c855ebd3a20",
00:18:06.893 "name": "lvs_n_0",
00:18:06.893 "base_bdev": "e538ff3d-fa05-4018-be4c-71e483c46641",
00:18:06.893 "total_data_clusters": 1276,
00:18:06.893 "free_clusters": 1276,
00:18:06.893 "block_size": 4096,
00:18:06.893 "cluster_size": 4194304
00:18:06.893 }
00:18:06.893 ]'
00:18:06.893 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="0c3986a4-3857-4d7f-b454-8c855ebd3a20") .free_clusters'
00:18:07.153 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276
00:18:07.153 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="0c3986a4-3857-4d7f-b454-8c855ebd3a20") .cluster_size'
00:18:07.153 5104
00:18:07.153 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304
00:18:07.153 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104
00:18:07.153 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104
00:18:07.153 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']'
00:18:07.153 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0c3986a4-3857-4d7f-b454-8c855ebd3a20 lbd_nest_0 5104
00:18:07.412 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=0ded8018-a9eb-4e9f-bc16-b7783ce7e489
00:18:07.412 22:49:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:18:07.670 22:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid
00:18:07.670 22:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 0ded8018-a9eb-4e9f-bc16-b7783ce7e489
00:18:07.929 22:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:18:08.187 22:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128")
00:18:08.187 22:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072")
00:18:08.187 22:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:18:08.187 22:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:18:08.187 22:49:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:18:08.446 Initializing NVMe Controllers
00:18:08.446 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:18:08.446 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512
00:18:08.446 WARNING: Some requested NVMe devices were skipped
00:18:08.446 No valid NVMe controllers or AIO or URING devices found
00:18:08.446 22:49:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:18:08.446 22:49:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:18:20.666 Initializing NVMe Controllers
00:18:20.666 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:18:20.666 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:20.666 Initialization complete. Launching workers.
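Note on the get_lvs_free_mb traces above: free space in MiB is free_clusters * cluster_size / 1 MiB, which is where 5112 (1278 free clusters in lvs_0) and 5104 (1276 in lvs_n_0) come from. A sketch of the same query and arithmetic, reusing the job's rpc.py path and the lvs_0 UUID from the trace (results of the qd=1 sweep follow below):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
uuid=8957fe95-54d5-49f3-93b6-4dcf147edcfc
fc=$("$rpc" bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\").free_clusters")
cs=$("$rpc" bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\").cluster_size")
echo $(( fc * cs / 1024 / 1024 ))   # 1278 * 4194304 / 1048576 = 5112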
00:18:20.666 ========================================================
00:18:20.666 Latency(us)
00:18:20.666 Device Information : IOPS MiB/s Average min max
00:18:20.666 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 971.60 121.45 1028.85 331.81 8403.97
00:18:20.666 ========================================================
00:18:20.666 Total : 971.60 121.45 1028.85 331.81 8403.97
00:18:20.666
00:18:20.666 22:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:18:20.666 22:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:18:20.666 22:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:18:20.666 Initializing NVMe Controllers
00:18:20.666 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:18:20.666 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512
00:18:20.666 WARNING: Some requested NVMe devices were skipped
00:18:20.666 No valid NVMe controllers or AIO or URING devices found
00:18:20.666 22:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:18:20.666 22:49:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:18:30.721 Initializing NVMe Controllers
00:18:30.721 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:18:30.721 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:30.721 Initialization complete. Launching workers.
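Note: the remaining perf runs are driven by the qd_depth and io_size arrays set at host/perf.sh@95-96, one run per (queue depth, I/O size) pair; the qd=32, 128 KiB results print below. Roughly:

qd_depth=(1 32 128)
io_size=(512 131072)
for qd in "${qd_depth[@]}"; do
  for o in "${io_size[@]}"; do
    # PERF and TRID as in the earlier sketch; every 512-byte run is skipped
    # because the namespace block size is 4096, as the warnings above show
    "$PERF" -q "$qd" -o "$o" -w randrw -M 50 -t 10 -r "$TRID"
  done
done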
00:18:30.721 ========================================================
00:18:30.721 Latency(us)
00:18:30.721 Device Information : IOPS MiB/s Average min max
00:18:30.721 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1355.10 169.39 23658.60 5236.71 60045.41
00:18:30.721 ========================================================
00:18:30.721 Total : 1355.10 169.39 23658.60 5236.71 60045.41
00:18:30.721
00:18:30.721 22:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}"
00:18:30.721 22:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:18:30.721 22:49:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:18:30.721 Initializing NVMe Controllers
00:18:30.721 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:18:30.721 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512
00:18:30.721 WARNING: Some requested NVMe devices were skipped
00:18:30.721 No valid NVMe controllers or AIO or URING devices found
00:18:30.721 22:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}"
00:18:30.721 22:49:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
00:18:40.697 Initializing NVMe Controllers
00:18:40.697 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:18:40.697 Controller IO queue size 128, less than required.
00:18:40.697 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:18:40.697 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:40.697 Initialization complete. Launching workers.
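Note: in these latency tables MiB/s is just IOPS times I/O size. For the qd=32, 128 KiB run above, 1355.10 * 131072 / 1048576 ≈ 169.39 MiB/s, matching the second column; the qd=128 table follows below. Checked with awk:

awk 'BEGIN { printf "%.2f MiB/s\n", 1355.10 * 131072 / 1048576 }'   # 169.39 MiB/s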
00:18:40.697 ========================================================
00:18:40.697 Latency(us)
00:18:40.697 Device Information : IOPS MiB/s Average min max
00:18:40.697 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4162.68 520.34 30784.54 10064.68 68190.05
00:18:40.697 ========================================================
00:18:40.697 Total : 4162.68 520.34 30784.54 10064.68 68190.05
00:18:40.697
00:18:40.697 22:49:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:40.697 22:49:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0ded8018-a9eb-4e9f-bc16-b7783ce7e489
00:18:40.697 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:18:40.697 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete e538ff3d-fa05-4018-be4c-71e483c46641
00:18:40.955 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:18:41.214 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:18:41.214 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:18:41.214 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup
00:18:41.214 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:18:41.214 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:41.214 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:18:41.214 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:41.214 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:41.214 rmmod nvme_tcp
00:18:41.214 rmmod nvme_fabrics
00:18:41.214 rmmod nvme_keyring
00:18:41.473 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:41.473 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:18:41.473 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:18:41.473 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 88106 ']'
00:18:41.473 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 88106
00:18:41.473 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 88106 ']'
00:18:41.473 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 88106
00:18:41.473 22:49:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname
00:18:41.473 22:49:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:41.473 22:49:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88106
00:18:41.473 killing process with pid 88106
00:18:41.473 22:49:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:41.473 22:49:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:41.473 22:49:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88106'
22:49:56 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@969 -- # kill 88106 00:18:41.473 22:49:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 88106 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:18:43.378 00:18:43.378 real 0m50.803s 00:18:43.378 user 3m10.518s 00:18:43.378 sys 0m12.307s 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:43.378 ************************************ 00:18:43.378 END TEST nvmf_perf 00:18:43.378 ************************************ 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:43.378 ************************************ 00:18:43.378 START TEST nvmf_fio_host 00:18:43.378 ************************************ 00:18:43.378 22:49:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:43.378 * Looking for test storage... 00:18:43.378 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:43.378 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:43.378 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:18:43.378 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:43.378 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:43.378 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.378 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.378 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.378 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:43.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.639 --rc genhtml_branch_coverage=1 00:18:43.639 --rc genhtml_function_coverage=1 00:18:43.639 --rc genhtml_legend=1 00:18:43.639 --rc geninfo_all_blocks=1 00:18:43.639 --rc geninfo_unexecuted_blocks=1 00:18:43.639 00:18:43.639 ' 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:43.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.639 --rc genhtml_branch_coverage=1 00:18:43.639 --rc genhtml_function_coverage=1 00:18:43.639 --rc genhtml_legend=1 00:18:43.639 --rc geninfo_all_blocks=1 00:18:43.639 --rc geninfo_unexecuted_blocks=1 00:18:43.639 00:18:43.639 ' 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:43.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.639 --rc genhtml_branch_coverage=1 00:18:43.639 --rc genhtml_function_coverage=1 00:18:43.639 --rc genhtml_legend=1 00:18:43.639 --rc geninfo_all_blocks=1 00:18:43.639 --rc geninfo_unexecuted_blocks=1 00:18:43.639 00:18:43.639 ' 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:43.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.639 --rc genhtml_branch_coverage=1 00:18:43.639 --rc genhtml_function_coverage=1 00:18:43.639 --rc genhtml_legend=1 00:18:43.639 --rc geninfo_all_blocks=1 00:18:43.639 --rc geninfo_unexecuted_blocks=1 00:18:43.639 00:18:43.639 ' 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.639 22:49:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.639 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.640 22:49:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:43.640 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
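Note: the nvmftestinit/nvmf_veth_init trace that follows builds the virtual test network these variables describe: veth pairs for the initiator side (10.0.0.1 and 10.0.0.2) and, moved into the nvmf_tgt_ns_spdk namespace, for the target side (10.0.0.3 and 10.0.0.4), all joined by the nvmf_br bridge, with iptables ACCEPT rules for the NVMe/TCP port 4420. Condensed to a single initiator/target pair, the setup amounts to:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up                             # the full trace brings every link up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                 # bridge both peer ends together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT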
00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:43.640 Cannot find device "nvmf_init_br" 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:43.640 Cannot find device "nvmf_init_br2" 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:43.640 Cannot find device "nvmf_tgt_br" 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:43.640 Cannot find device "nvmf_tgt_br2" 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:43.640 Cannot find device "nvmf_init_br" 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:43.640 Cannot find device "nvmf_init_br2" 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:43.640 Cannot find device "nvmf_tgt_br" 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:43.640 Cannot find device "nvmf_tgt_br2" 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:43.640 Cannot find device "nvmf_br" 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:43.640 Cannot find device "nvmf_init_if" 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:43.640 Cannot find device "nvmf_init_if2" 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:43.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:43.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:43.640 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:43.900 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:43.900 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.084 ms
00:18:43.900
00:18:43.900 --- 10.0.0.3 ping statistics ---
00:18:43.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:43.900 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:18:43.900 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:18:43.900 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.169 ms
00:18:43.900
00:18:43.900 --- 10.0.0.4 ping statistics ---
00:18:43.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:43.900 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:18:43.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:43.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:18:43.900
00:18:43.900 --- 10.0.0.1 ping statistics ---
00:18:43.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:43.900 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:18:43.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:43.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms
00:18:43.900
00:18:43.900 --- 10.0.0.2 ping statistics ---
00:18:43.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:43.900 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # return 0
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']'
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]]
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]]
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']'
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=88961
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 88961
00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@831 -- # '[' -z 88961 ']' 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:43.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:43.900 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.160 [2024-12-07 22:49:58.698319] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:18:44.160 [2024-12-07 22:49:58.698409] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:44.160 [2024-12-07 22:49:58.839647] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:44.160 [2024-12-07 22:49:58.883535] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:44.160 [2024-12-07 22:49:58.883598] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:44.160 [2024-12-07 22:49:58.883612] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:44.160 [2024-12-07 22:49:58.883622] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:44.160 [2024-12-07 22:49:58.883632] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
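Note: the target side of this test is just an nvmf_tgt process inside the namespace plus a handful of RPCs, all of which appear in the trace above and below; condensed:

SPDK=/home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
# waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers, then:
"$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc1
"$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420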
00:18:44.160 [2024-12-07 22:49:58.884399] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.161 [2024-12-07 22:49:58.884542] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.161 [2024-12-07 22:49:58.884679] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:44.161 [2024-12-07 22:49:58.884686] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.161 [2024-12-07 22:49:58.920605] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:44.420 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:44.420 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:18:44.420 22:49:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:44.680 [2024-12-07 22:49:59.231136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.680 22:49:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:44.680 22:49:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.680 22:49:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:44.680 22:49:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:44.939 Malloc1 00:18:44.939 22:49:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:45.215 22:49:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:45.474 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:45.734 [2024-12-07 22:50:00.341627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:45.734 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:45.993 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:45.993 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:45.993 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:45.993 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:45.993 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:45.993 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:45.993 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:45.993 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:45.993 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:45.994 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:45.994 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:45.994 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:45.994 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:45.994 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:45.994 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:45.994 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:45.994 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:45.994 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:45.994 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:45.994 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:45.994 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:45.994 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:45.994 22:50:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:46.252 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:46.252 fio-3.35 00:18:46.252 Starting 1 thread 00:18:48.786 00:18:48.786 test: (groupid=0, jobs=1): err= 0: pid=89031: Sat Dec 7 22:50:03 2024 00:18:48.786 read: IOPS=9497, BW=37.1MiB/s (38.9MB/s)(74.4MiB/2006msec) 00:18:48.786 slat (nsec): min=1782, max=312004, avg=2211.88, stdev=3177.00 00:18:48.786 clat (usec): min=2519, max=12237, avg=7027.64, stdev=565.53 00:18:48.786 lat (usec): min=2564, max=12239, avg=7029.86, stdev=565.40 00:18:48.786 clat percentiles (usec): 00:18:48.786 | 1.00th=[ 5866], 5.00th=[ 6259], 10.00th=[ 6390], 20.00th=[ 6587], 00:18:48.786 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:18:48.786 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 7963], 00:18:48.786 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[10290], 99.95th=[11600], 00:18:48.786 | 99.99th=[12256] 00:18:48.786 bw ( KiB/s): min=36976, max=38840, per=99.98%, avg=37980.00, stdev=807.82, samples=4 00:18:48.786 iops : min= 9244, max= 9710, avg=9495.00, stdev=201.95, samples=4 00:18:48.786 write: IOPS=9505, BW=37.1MiB/s (38.9MB/s)(74.5MiB/2006msec); 0 zone resets 00:18:48.786 slat (nsec): min=1859, max=223114, avg=2320.27, stdev=2246.46 00:18:48.786 clat (usec): min=2378, max=12306, avg=6404.54, stdev=528.96 00:18:48.786 lat (usec): min=2392, max=12309, avg=6406.86, stdev=528.95 00:18:48.786 clat 
percentiles (usec): 00:18:48.786 | 1.00th=[ 5407], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 5997], 00:18:48.786 | 30.00th=[ 6128], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6456], 00:18:48.786 | 70.00th=[ 6587], 80.00th=[ 6783], 90.00th=[ 7046], 95.00th=[ 7242], 00:18:48.786 | 99.00th=[ 7832], 99.50th=[ 8225], 99.90th=[10290], 99.95th=[11469], 00:18:48.786 | 99.99th=[12256] 00:18:48.786 bw ( KiB/s): min=37760, max=38472, per=99.95%, avg=38004.00, stdev=324.14, samples=4 00:18:48.786 iops : min= 9440, max= 9618, avg=9501.00, stdev=81.03, samples=4 00:18:48.786 lat (msec) : 4=0.10%, 10=99.76%, 20=0.13% 00:18:48.786 cpu : usr=69.98%, sys=23.64%, ctx=10, majf=0, minf=6 00:18:48.786 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:48.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:48.786 issued rwts: total=19051,19068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.786 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:48.786 00:18:48.786 Run status group 0 (all jobs): 00:18:48.786 READ: bw=37.1MiB/s (38.9MB/s), 37.1MiB/s-37.1MiB/s (38.9MB/s-38.9MB/s), io=74.4MiB (78.0MB), run=2006-2006msec 00:18:48.786 WRITE: bw=37.1MiB/s (38.9MB/s), 37.1MiB/s-37.1MiB/s (38.9MB/s-38.9MB/s), io=74.5MiB (78.1MB), run=2006-2006msec 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:48.786 22:50:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:48.786 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:48.786 fio-3.35 00:18:48.786 Starting 1 thread 00:18:51.322 00:18:51.322 test: (groupid=0, jobs=1): err= 0: pid=89074: Sat Dec 7 22:50:05 2024 00:18:51.322 read: IOPS=8921, BW=139MiB/s (146MB/s)(280MiB/2006msec) 00:18:51.322 slat (usec): min=2, max=159, avg= 3.55, stdev= 2.56 00:18:51.322 clat (usec): min=1864, max=16292, avg=7888.05, stdev=2349.85 00:18:51.322 lat (usec): min=1867, max=16297, avg=7891.61, stdev=2350.02 00:18:51.322 clat percentiles (usec): 00:18:51.322 | 1.00th=[ 3621], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5800], 00:18:51.322 | 30.00th=[ 6456], 40.00th=[ 7046], 50.00th=[ 7701], 60.00th=[ 8291], 00:18:51.322 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10945], 95.00th=[12256], 00:18:51.322 | 99.00th=[14222], 99.50th=[15008], 99.90th=[15926], 99.95th=[16057], 00:18:51.322 | 99.99th=[16319] 00:18:51.322 bw ( KiB/s): min=65440, max=77216, per=50.22%, avg=71685.50, stdev=5585.23, samples=4 00:18:51.322 iops : min= 4090, max= 4826, avg=4480.25, stdev=349.15, samples=4 00:18:51.322 write: IOPS=5231, BW=81.7MiB/s (85.7MB/s)(146MiB/1785msec); 0 zone resets 00:18:51.322 slat (usec): min=31, max=273, avg=36.58, stdev= 9.07 00:18:51.322 clat (usec): min=5093, max=20409, avg=11540.67, stdev=2271.44 00:18:51.322 lat (usec): min=5125, max=20441, avg=11577.25, stdev=2274.67 00:18:51.322 clat percentiles (usec): 00:18:51.322 | 1.00th=[ 7504], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9634], 00:18:51.322 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11076], 60.00th=[11731], 00:18:51.322 | 70.00th=[12649], 80.00th=[13566], 90.00th=[14877], 95.00th=[15664], 00:18:51.322 | 99.00th=[17433], 99.50th=[17957], 99.90th=[19268], 99.95th=[19792], 00:18:51.322 | 99.99th=[20317] 00:18:51.322 bw ( KiB/s): min=69920, max=78688, per=89.21%, avg=74676.00, stdev=4558.16, samples=4 00:18:51.322 iops : min= 4370, max= 4918, avg=4667.25, stdev=284.89, samples=4 00:18:51.322 lat (msec) : 2=0.01%, 4=1.41%, 10=62.51%, 20=36.06%, 50=0.01% 00:18:51.322 cpu : usr=82.14%, sys=12.67%, ctx=714, majf=0, minf=2 00:18:51.322 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:18:51.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:51.322 issued rwts: total=17896,9339,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.322 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:51.322 00:18:51.322 Run status group 0 (all jobs): 
00:18:51.322 READ: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=280MiB (293MB), run=2006-2006msec 00:18:51.322 WRITE: bw=81.7MiB/s (85.7MB/s), 81.7MiB/s-81.7MiB/s (85.7MB/s-85.7MB/s), io=146MiB (153MB), run=1785-1785msec 00:18:51.322 22:50:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:51.322 22:50:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:18:51.322 22:50:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:18:51.322 22:50:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:18:51.322 22:50:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:18:51.322 22:50:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:18:51.322 22:50:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:51.322 22:50:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:51.322 22:50:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:18:51.322 22:50:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:18:51.322 22:50:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:18:51.322 22:50:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:18:51.581 Nvme0n1 00:18:51.581 22:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:18:51.839 22:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=3142c217-970e-4761-9257-f29a94649b62 00:18:51.839 22:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 3142c217-970e-4761-9257-f29a94649b62 00:18:51.839 22:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=3142c217-970e-4761-9257-f29a94649b62 00:18:51.839 22:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:51.839 22:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:18:51.839 22:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:18:51.839 22:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:52.098 22:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:52.098 { 00:18:52.098 "uuid": "3142c217-970e-4761-9257-f29a94649b62", 00:18:52.098 "name": "lvs_0", 00:18:52.098 "base_bdev": "Nvme0n1", 00:18:52.098 "total_data_clusters": 4, 00:18:52.098 "free_clusters": 4, 00:18:52.098 "block_size": 4096, 00:18:52.098 "cluster_size": 1073741824 00:18:52.098 } 00:18:52.098 ]' 00:18:52.098 22:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="3142c217-970e-4761-9257-f29a94649b62") .free_clusters' 00:18:52.098 22:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 
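The get_lvs_free_mb helper tracing here (the cluster-size query and final echo follow just below) reduces to free_clusters x cluster_size expressed in MiB. A condensed sketch, assuming the same rpc.py socket, with the repo path shortened and the lvstore UUID taken from the log:

    uuid=3142c217-970e-4761-9257-f29a94649b62
    fc=$(scripts/rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .free_clusters")
    cs=$(scripts/rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$uuid\") .cluster_size")
    echo $((fc * cs / 1024 / 1024))   # 4 clusters * 1073741824 B = 4096 MiB, matching free_mb=4096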
00:18:52.098 22:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="3142c217-970e-4761-9257-f29a94649b62") .cluster_size' 00:18:52.098 4096 00:18:52.098 22:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:18:52.098 22:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:18:52.098 22:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:18:52.098 22:50:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:18:52.357 bf803bf1-e97c-4979-a249-93081910d75d 00:18:52.357 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:18:52.614 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:18:52.873 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:53.132 22:50:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:53.132 22:50:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:53.391 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:53.391 fio-3.35 00:18:53.391 Starting 1 thread 00:18:55.921 00:18:55.921 test: (groupid=0, jobs=1): err= 0: pid=89187: Sat Dec 7 22:50:10 2024 00:18:55.921 read: IOPS=6229, BW=24.3MiB/s (25.5MB/s)(48.9MiB/2009msec) 00:18:55.921 slat (nsec): min=1898, max=304420, avg=2644.46, stdev=3981.04 00:18:55.921 clat (usec): min=2985, max=18779, avg=10753.11, stdev=874.49 00:18:55.921 lat (usec): min=2994, max=18781, avg=10755.75, stdev=874.16 00:18:55.921 clat percentiles (usec): 00:18:55.921 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:18:55.921 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:18:55.921 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11731], 95.00th=[12125], 00:18:55.921 | 99.00th=[12649], 99.50th=[13173], 99.90th=[16450], 99.95th=[17695], 00:18:55.921 | 99.99th=[17957] 00:18:55.921 bw ( KiB/s): min=24056, max=25432, per=99.90%, avg=24896.00, stdev=589.83, samples=4 00:18:55.921 iops : min= 6014, max= 6358, avg=6224.00, stdev=147.46, samples=4 00:18:55.921 write: IOPS=6222, BW=24.3MiB/s (25.5MB/s)(48.8MiB/2009msec); 0 zone resets 00:18:55.921 slat (usec): min=2, max=186, avg= 2.78, stdev= 2.74 00:18:55.921 clat (usec): min=2409, max=17778, avg=9746.90, stdev=842.01 00:18:55.921 lat (usec): min=2423, max=17781, avg=9749.68, stdev=841.81 00:18:55.921 clat percentiles (usec): 00:18:55.921 | 1.00th=[ 7963], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9110], 00:18:55.921 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:18:55.921 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:18:55.921 | 99.00th=[11469], 99.50th=[11863], 99.90th=[16712], 99.95th=[17433], 00:18:55.921 | 99.99th=[17695] 00:18:55.921 bw ( KiB/s): min=24768, max=25032, per=99.98%, avg=24882.00, stdev=112.83, samples=4 00:18:55.921 iops : min= 6192, max= 6258, avg=6220.50, stdev=28.21, samples=4 00:18:55.921 lat (msec) : 4=0.06%, 10=39.99%, 20=59.95% 00:18:55.921 cpu : usr=74.90%, sys=19.97%, ctx=7, majf=0, minf=6 00:18:55.921 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:18:55.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:55.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:55.921 issued rwts: total=12516,12500,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:55.921 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:55.921 00:18:55.921 Run status group 0 (all jobs): 00:18:55.921 READ: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=48.9MiB 
(51.3MB), run=2009-2009msec 00:18:55.921 WRITE: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=48.8MiB (51.2MB), run=2009-2009msec 00:18:55.921 22:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:55.921 22:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:18:56.488 22:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=11eeb3f6-54d0-4de8-8ca5-90ef309c56bd 00:18:56.488 22:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 11eeb3f6-54d0-4de8-8ca5-90ef309c56bd 00:18:56.488 22:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=11eeb3f6-54d0-4de8-8ca5-90ef309c56bd 00:18:56.488 22:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:56.488 22:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:18:56.488 22:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:18:56.488 22:50:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:56.488 22:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:56.488 { 00:18:56.488 "uuid": "3142c217-970e-4761-9257-f29a94649b62", 00:18:56.488 "name": "lvs_0", 00:18:56.488 "base_bdev": "Nvme0n1", 00:18:56.488 "total_data_clusters": 4, 00:18:56.488 "free_clusters": 0, 00:18:56.488 "block_size": 4096, 00:18:56.488 "cluster_size": 1073741824 00:18:56.488 }, 00:18:56.488 { 00:18:56.488 "uuid": "11eeb3f6-54d0-4de8-8ca5-90ef309c56bd", 00:18:56.488 "name": "lvs_n_0", 00:18:56.488 "base_bdev": "bf803bf1-e97c-4979-a249-93081910d75d", 00:18:56.488 "total_data_clusters": 1022, 00:18:56.488 "free_clusters": 1022, 00:18:56.488 "block_size": 4096, 00:18:56.488 "cluster_size": 4194304 00:18:56.488 } 00:18:56.488 ]' 00:18:56.488 22:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="11eeb3f6-54d0-4de8-8ca5-90ef309c56bd") .free_clusters' 00:18:56.488 22:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:18:56.488 22:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="11eeb3f6-54d0-4de8-8ca5-90ef309c56bd") .cluster_size' 00:18:56.747 22:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:56.747 22:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:18:56.747 4088 00:18:56.747 22:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:18:56.747 22:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:18:56.747 42656ff6-7782-4c4e-9d5e-dbdb9676c657 00:18:56.747 22:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:18:57.006 22:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:18:57.267 
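Condensing the nested-lvol flow that ends here, plus the listener and fio steps that follow immediately below: a sketch with rpc.py and the fio paths shortened for readability (all commands appear verbatim in the trace):

    rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0   # lvstore stacked on an lvol
    rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088                        # 1022 clusters * 4 MiB = 4088 MiB
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420
    # then exercise it through the fio NVMe plugin, as the trace does:
    LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096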
22:50:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:18:57.527 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:57.527 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:57.527 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:57.527 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:57.527 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:57.527 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:57.527 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:57.527 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:57.527 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:57.528 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:57.528 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:57.528 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:57.528 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:57.528 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:57.528 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:57.528 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:57.528 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:57.528 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:57.528 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:57.528 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:57.528 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:57.528 22:50:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:57.786 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:57.786 fio-3.35 00:18:57.786 Starting 1 thread 00:19:00.317 00:19:00.317 test: (groupid=0, jobs=1): err= 0: pid=89262: Sat Dec 7 22:50:14 2024 
00:19:00.317 read: IOPS=5733, BW=22.4MiB/s (23.5MB/s)(45.0MiB/2010msec) 00:19:00.317 slat (usec): min=2, max=317, avg= 2.99, stdev= 4.33 00:19:00.317 clat (usec): min=3281, max=20757, avg=11673.10, stdev=955.46 00:19:00.317 lat (usec): min=3291, max=20760, avg=11676.09, stdev=955.13 00:19:00.317 clat percentiles (usec): 00:19:00.317 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:19:00.317 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863], 00:19:00.317 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[13173], 00:19:00.317 | 99.00th=[13698], 99.50th=[14222], 99.90th=[17171], 99.95th=[19268], 00:19:00.317 | 99.99th=[19792] 00:19:00.317 bw ( KiB/s): min=22016, max=23472, per=100.00%, avg=22940.00, stdev=649.32, samples=4 00:19:00.317 iops : min= 5504, max= 5868, avg=5735.00, stdev=162.33, samples=4 00:19:00.317 write: IOPS=5724, BW=22.4MiB/s (23.4MB/s)(44.9MiB/2010msec); 0 zone resets 00:19:00.317 slat (usec): min=2, max=303, avg= 3.07, stdev= 3.75 00:19:00.317 clat (usec): min=2546, max=20784, avg=10594.01, stdev=954.76 00:19:00.317 lat (usec): min=2560, max=20787, avg=10597.08, stdev=954.56 00:19:00.317 clat percentiles (usec): 00:19:00.317 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:19:00.317 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:19:00.317 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11600], 95.00th=[11863], 00:19:00.317 | 99.00th=[12649], 99.50th=[13042], 99.90th=[18220], 99.95th=[19530], 00:19:00.317 | 99.99th=[20841] 00:19:00.317 bw ( KiB/s): min=22784, max=22984, per=99.87%, avg=22868.00, stdev=96.99, samples=4 00:19:00.317 iops : min= 5696, max= 5746, avg=5717.00, stdev=24.25, samples=4 00:19:00.317 lat (msec) : 4=0.05%, 10=13.31%, 20=86.63%, 50=0.01% 00:19:00.317 cpu : usr=72.87%, sys=21.40%, ctx=4, majf=0, minf=6 00:19:00.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:00.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:00.317 issued rwts: total=11525,11506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.317 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:00.317 00:19:00.317 Run status group 0 (all jobs): 00:19:00.317 READ: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.0MiB (47.2MB), run=2010-2010msec 00:19:00.317 WRITE: bw=22.4MiB/s (23.4MB/s), 22.4MiB/s-22.4MiB/s (23.4MB/s-23.4MB/s), io=44.9MiB (47.1MB), run=2010-2010msec 00:19:00.317 22:50:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:00.317 22:50:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:19:00.317 22:50:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:19:00.575 22:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:19:00.833 22:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:19:01.091 22:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:19:01.348 22:50:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:02.284 rmmod nvme_tcp 00:19:02.284 rmmod nvme_fabrics 00:19:02.284 rmmod nvme_keyring 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 88961 ']' 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 88961 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 88961 ']' 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 88961 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88961 00:19:02.284 killing process with pid 88961 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88961' 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 88961 00:19:02.284 22:50:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 88961 00:19:02.284 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:02.284 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:02.284 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:02.284 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:19:02.284 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:19:02.284 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:02.284 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:19:02.284 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:02.284 
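The iptr helper expanded just above is worth spelling out: every firewall rule the test added carries an SPDK_NVMF comment, so teardown is a save/filter/restore round trip:

    # Drop every rule tagged by the test run, leave all other rules intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore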
22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:02.284 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:19:02.544 00:19:02.544 real 0m19.293s 00:19:02.544 user 1m23.860s 00:19:02.544 sys 0m4.402s 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:02.544 ************************************ 00:19:02.544 END TEST nvmf_fio_host 00:19:02.544 ************************************ 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:02.544 22:50:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:02.805 ************************************ 00:19:02.805 START TEST nvmf_failover 00:19:02.805 ************************************ 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:02.805 * Looking for test storage... 
00:19:02.805 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:02.805 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:02.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.805 --rc genhtml_branch_coverage=1 00:19:02.805 --rc genhtml_function_coverage=1 00:19:02.805 --rc genhtml_legend=1 00:19:02.806 --rc geninfo_all_blocks=1 00:19:02.806 --rc geninfo_unexecuted_blocks=1 00:19:02.806 00:19:02.806 ' 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:02.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.806 --rc genhtml_branch_coverage=1 00:19:02.806 --rc genhtml_function_coverage=1 00:19:02.806 --rc genhtml_legend=1 00:19:02.806 --rc geninfo_all_blocks=1 00:19:02.806 --rc geninfo_unexecuted_blocks=1 00:19:02.806 00:19:02.806 ' 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:02.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.806 --rc genhtml_branch_coverage=1 00:19:02.806 --rc genhtml_function_coverage=1 00:19:02.806 --rc genhtml_legend=1 00:19:02.806 --rc geninfo_all_blocks=1 00:19:02.806 --rc geninfo_unexecuted_blocks=1 00:19:02.806 00:19:02.806 ' 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:02.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:02.806 --rc genhtml_branch_coverage=1 00:19:02.806 --rc genhtml_function_coverage=1 00:19:02.806 --rc genhtml_legend=1 00:19:02.806 --rc geninfo_all_blocks=1 00:19:02.806 --rc geninfo_unexecuted_blocks=1 00:19:02.806 00:19:02.806 ' 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.806 
22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:02.806 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 
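prepare_net_devs has just set is_hw=no; the guards traced below (NET_TYPE=virt from the header, transport tcp) route the run into nvmf_veth_init instead of claiming real NICs. A simplified reading of that dispatch, not the verbatim common.sh source ($TEST_TRANSPORT is an inferred name; the trace only shows the expanded comparison):

    # NET_TYPE=virt, so neither the phy nor the phy-fallback branch is taken
    if [[ $NET_TYPE != phy && $NET_TYPE != phy-fallback && $TEST_TRANSPORT == tcp ]]; then
        nvmf_veth_init   # defines the 10.0.0.x plan and builds the topology below
    fi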
00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:02.806 Cannot find device "nvmf_init_br" 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:02.806 Cannot find device "nvmf_init_br2" 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:19:02.806 Cannot find device "nvmf_tgt_br" 00:19:02.806 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:19:02.807 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:03.066 Cannot find device "nvmf_tgt_br2" 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:03.066 Cannot find device "nvmf_init_br" 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:03.066 Cannot find device "nvmf_init_br2" 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:03.066 Cannot find device "nvmf_tgt_br" 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:03.066 Cannot find device "nvmf_tgt_br2" 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:03.066 Cannot find device "nvmf_br" 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:03.066 Cannot find device "nvmf_init_if" 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:03.066 Cannot find device "nvmf_init_if2" 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:03.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:03.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:03.066 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:03.067 
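The interface zoo assembled across these lines (the second netns move, the addressing, and the bridge steps continue just below) reduces to veth pairs bridged between the host and the nvmf_tgt_ns_spdk namespace. Condensed to the first pair, with names and addresses exactly as in the log:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                              # one bridge stitches both sides
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br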
22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:03.067 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:03.326 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:03.326 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:19:03.326 00:19:03.326 --- 10.0.0.3 ping statistics --- 00:19:03.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.326 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:03.326 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:03.326 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:19:03.326 00:19:03.326 --- 10.0.0.4 ping statistics --- 00:19:03.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.326 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:03.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:03.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:19:03.326 00:19:03.326 --- 10.0.0.1 ping statistics --- 00:19:03.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.326 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:03.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:03.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:19:03.326 00:19:03.326 --- 10.0.0.2 ping statistics --- 00:19:03.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:03.326 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # return 0 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=89558 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 89558 00:19:03.326 22:50:17 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 89558 ']' 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:03.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:03.326 22:50:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:03.326 [2024-12-07 22:50:17.955673] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:03.326 [2024-12-07 22:50:17.955762] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.585 [2024-12-07 22:50:18.094634] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:03.585 [2024-12-07 22:50:18.127022] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.585 [2024-12-07 22:50:18.127072] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.585 [2024-12-07 22:50:18.127097] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.585 [2024-12-07 22:50:18.127104] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.585 [2024-12-07 22:50:18.127111] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
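The nvmf_veth_init sequence above is the entire test bed: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined over the nvmf_br bridge, and iptables opened for the NVMe/TCP ports before the cross-namespace pings confirm reachability. For anyone reproducing it outside the harness, a minimal standalone sketch using only the iproute2/iptables calls visible in the log (reduced to one veth pair per side; interface names and addresses as in the log):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                              # bridge the two bare peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                           # root ns -> target ns, as at common.sh@222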
00:19:03.585 [2024-12-07 22:50:18.127228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.585 [2024-12-07 22:50:18.127383] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:03.586 [2024-12-07 22:50:18.127386] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.586 [2024-12-07 22:50:18.155270] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:03.586 22:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:03.586 22:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:03.586 22:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:03.586 22:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:03.586 22:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:03.586 22:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.586 22:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:03.843 [2024-12-07 22:50:18.534650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.843 22:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:04.101 Malloc0 00:19:04.101 22:50:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:04.359 22:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:04.616 22:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:04.874 [2024-12-07 22:50:19.549009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:04.874 22:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:05.132 [2024-12-07 22:50:19.781168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:05.132 22:50:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:05.391 [2024-12-07 22:50:20.017437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:05.391 22:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=89614 00:19:05.391 22:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:05.391 22:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
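Stripped of the xtrace noise, the target-side provisioning between host/failover.sh@22 and @28 is just five RPCs plus the extra listeners. A condensed sketch of the same sequence as standalone shell (script path, NQN, serial, address, and ports exactly as they appear in the log; for bdev_malloc_create the positional arguments are the size in MiB and the block size):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192        # transport options as assembled by nvmf/common.sh
    $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                      # three listeners give the test its failover hops
        $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
    done

bdevperf is then launched with -z, i.e. it sits idle until the perform_tests RPC at host/failover.sh@38 kicks the workload off.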
00:19:05.391 22:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 89614 /var/tmp/bdevperf.sock 00:19:05.391 22:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 89614 ']' 00:19:05.391 22:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.391 22:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:05.391 22:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.391 22:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:05.391 22:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:05.650 22:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:05.650 22:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:05.650 22:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:05.908 NVMe0n1 00:19:06.167 22:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:06.426 00:19:06.426 22:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=89624 00:19:06.426 22:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:06.426 22:50:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:19:07.362 22:50:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:07.621 22:50:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:19:10.998 22:50:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:10.998 00:19:10.998 22:50:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:11.257 22:50:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:14.542 22:50:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:14.542 [2024-12-07 22:50:29.200667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:14.542 22:50:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:19:15.476 22:50:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:16.043 22:50:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 89624 00:19:21.330 { 00:19:21.330 "results": [ 00:19:21.330 { 00:19:21.330 "job": "NVMe0n1", 00:19:21.330 "core_mask": "0x1", 00:19:21.330 "workload": "verify", 00:19:21.330 "status": "finished", 00:19:21.330 "verify_range": { 00:19:21.330 "start": 0, 00:19:21.330 "length": 16384 00:19:21.330 }, 00:19:21.330 "queue_depth": 128, 00:19:21.330 "io_size": 4096, 00:19:21.330 "runtime": 15.009426, 00:19:21.330 "iops": 10036.293193357295, 00:19:21.330 "mibps": 39.20427028655193, 00:19:21.330 "io_failed": 3405, 00:19:21.330 "io_timeout": 0, 00:19:21.330 "avg_latency_us": 12443.458180212972, 00:19:21.330 "min_latency_us": 547.3745454545455, 00:19:21.330 "max_latency_us": 15847.796363636364 00:19:21.330 } 00:19:21.330 ], 00:19:21.330 "core_count": 1 00:19:21.330 } 00:19:21.608 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 89614 00:19:21.608 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 89614 ']' 00:19:21.608 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 89614 00:19:21.608 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:21.608 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:21.608 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89614 00:19:21.608 killing process with pid 89614 00:19:21.608 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:21.608 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:21.608 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89614' 00:19:21.608 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 89614 00:19:21.608 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 89614 00:19:21.608 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:21.608 [2024-12-07 22:50:20.083898] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:21.608 [2024-12-07 22:50:20.084026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89614 ] 00:19:21.608 [2024-12-07 22:50:20.210836] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.608 [2024-12-07 22:50:20.243774] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.608 [2024-12-07 22:50:20.271246] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:21.608 Running I/O for 15 seconds... 
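The results block above is internally consistent, and this is worth checking when reading such logs: mibps is just iops rescaled by the 4096-byte io_size, and the 15.009426 s runtime matches the -t 15 passed to bdevperf. For instance:

    # mibps = iops * io_size / 2^20 -> reproduces the reported 39.20427...
    echo '10036.293193357295 * 4096 / 1048576' | bc -l

The io_failed count of 3405 is presumably the failover exercise itself at work, I/O aborted while listeners were being pulled rather than verify mismatches, given the job still reports status finished. Note also that everything from the cat of try.txt onward, including the long dump that follows, is bdevperf's own log replayed into the test output.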
00:19:21.608 7716.00 IOPS, 30.14 MiB/s [2024-12-07T22:50:36.374Z] [2024-12-07 22:50:22.247560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.608 [2024-12-07 22:50:22.247612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.608 [2024-12-07 22:50:22.247657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.608 [2024-12-07 22:50:22.247672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.608 [2024-12-07 22:50:22.247687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.608 [2024-12-07 22:50:22.247700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.608 [2024-12-07 22:50:22.247714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.608 [2024-12-07 22:50:22.247728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.608 [2024-12-07 22:50:22.247742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.608 [2024-12-07 22:50:22.247755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.608 [2024-12-07 22:50:22.247769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.608 [2024-12-07 22:50:22.247782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.608 [2024-12-07 22:50:22.247796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.247809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.247823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.247836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.247851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.609 [2024-12-07 22:50:22.247864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.247878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.247905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.247921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.247965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.247982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.247996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 
22:50:22.248234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:40 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.248983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.248996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72352 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:21.609 [2024-12-07 22:50:22.249389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249676] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.609 [2024-12-07 22:50:22.249945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.609 [2024-12-07 22:50:22.249959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.249972] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.249988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:72744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
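This dump is the failover mechanic in miniature: once host/failover.sh@43 removed the 4420 listener, the target tore down the submission queue, and every command still in flight on qid:1 was manually completed with ABORTED - SQ DELETION (00/08) so that the bdev_nvme layer could fail over to the 4421 path attached at host/failover.sh@36. When skimming a dump like this, a rough tally of the abort storm is one grep away (file path as cat'ed above):

    grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt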
00:19:21.610 [2024-12-07 22:50:22.250553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:72760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250847] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.610 [2024-12-07 22:50:22.250866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.250897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b26540 is same with the state(6) to be set 00:19:21.610 [2024-12-07 22:50:22.250924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.610 [2024-12-07 22:50:22.250937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.610 [2024-12-07 22:50:22.250974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72848 len:8 PRP1 0x0 PRP2 0x0 00:19:21.610 [2024-12-07 22:50:22.250988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251003] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.610 [2024-12-07 22:50:22.251013] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.610 [2024-12-07 22:50:22.251024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72872 len:8 PRP1 0x0 PRP2 0x0 00:19:21.610 [2024-12-07 22:50:22.251042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.610 [2024-12-07 22:50:22.251066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.610 [2024-12-07 22:50:22.251077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72880 len:8 PRP1 0x0 PRP2 0x0 00:19:21.610 [2024-12-07 22:50:22.251090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.610 [2024-12-07 22:50:22.251114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.610 [2024-12-07 22:50:22.251124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72888 len:8 PRP1 0x0 PRP2 0x0 00:19:21.610 [2024-12-07 22:50:22.251138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.610 [2024-12-07 22:50:22.251162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.610 [2024-12-07 22:50:22.251172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72896 len:8 PRP1 0x0 PRP2 0x0 00:19:21.610 [2024-12-07 22:50:22.251186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.610 [2024-12-07 22:50:22.251209] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.610 [2024-12-07 22:50:22.251220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72904 len:8 PRP1 0x0 PRP2 0x0 00:19:21.610 [2024-12-07 22:50:22.251233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.610 [2024-12-07 22:50:22.251257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.610 [2024-12-07 22:50:22.251283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72912 len:8 PRP1 0x0 PRP2 0x0 00:19:21.610 [2024-12-07 22:50:22.251296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.610 [2024-12-07 22:50:22.251329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.610 [2024-12-07 22:50:22.251354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72920 len:8 PRP1 0x0 PRP2 0x0 00:19:21.610 [2024-12-07 22:50:22.251367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.610 [2024-12-07 22:50:22.251391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.610 [2024-12-07 22:50:22.251402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72928 len:8 PRP1 0x0 PRP2 0x0 00:19:21.610 [2024-12-07 22:50:22.251414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.610 [2024-12-07 22:50:22.251437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.610 [2024-12-07 22:50:22.251447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72936 len:8 PRP1 0x0 PRP2 0x0 00:19:21.610 [2024-12-07 22:50:22.251459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.610 [2024-12-07 22:50:22.251482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.610 [2024-12-07 22:50:22.251492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72944 len:8 PRP1 0x0 PRP2 0x0 00:19:21.610 [2024-12-07 22:50:22.251504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.610 [2024-12-07 22:50:22.251526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:19:21.610 [2024-12-07 22:50:22.251536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72952 len:8 PRP1 0x0 PRP2 0x0 00:19:21.610 [2024-12-07 22:50:22.251549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.610 [2024-12-07 22:50:22.251571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.610 [2024-12-07 22:50:22.251581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72960 len:8 PRP1 0x0 PRP2 0x0 00:19:21.610 [2024-12-07 22:50:22.251594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.610 [2024-12-07 22:50:22.251616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.610 [2024-12-07 22:50:22.251626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72968 len:8 PRP1 0x0 PRP2 0x0 00:19:21.610 [2024-12-07 22:50:22.251639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.610 [2024-12-07 22:50:22.251661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.610 [2024-12-07 22:50:22.251672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72976 len:8 PRP1 0x0 PRP2 0x0 00:19:21.610 [2024-12-07 22:50:22.251691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.610 [2024-12-07 22:50:22.251715] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.610 [2024-12-07 22:50:22.251725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72984 len:8 PRP1 0x0 PRP2 0x0 00:19:21.610 [2024-12-07 22:50:22.251738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251780] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b26540 was disconnected and freed. reset controller. 
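The long run of "ABORTED - SQ DELETION (00/08)" completions above is the host library flushing every command that was still queued on the dropped TCP qpair: the (00/08) pair is (sct/sc), i.e. status code type 00 (generic) with status code 08, "command aborted due to SQ deletion", and dnr:0 on each completion marks them as retriable. A minimal sketch of decoding that status with the public SPDK API follows; the callback and requeue hook names are hypothetical, while the spdk_nvme_cpl fields and the SPDK_NVME_SC_ABORTED_SQ_DELETION constant are SPDK's own definitions:

    #include "spdk/nvme.h"

    static void requeue_io(void *io_ctx); /* hypothetical application hook */

    /* spdk_nvme_cmd_cb-style I/O completion callback */
    static void
    io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl) &&
            cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            /* (00/08): the qpair was torn down before the command ran,
             * so it never reached the namespace and can safely be
             * resubmitted on another path. */
            requeue_io(cb_arg);
            return;
        }
        /* normal completion handling */
    }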
00:19:21.610 [2024-12-07 22:50:22.251800] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:21.610 [2024-12-07 22:50:22.251848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.610 [2024-12-07 22:50:22.251869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.610 [2024-12-07 22:50:22.251896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.610 [2024-12-07 22:50:22.251952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:21.610 [2024-12-07 22:50:22.251981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:22.251994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:21.610 [2024-12-07 22:50:22.252033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b05f10 (9): Bad file descriptor 00:19:21.610 [2024-12-07 22:50:22.255582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:21.610 [2024-12-07 22:50:22.289042] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
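The reset sequence above ("Bad file descriptor" flushing the old tqpair, then "resetting controller" and "Resetting controller successful") is driven from the completion-polling side: once the socket dies, processing completions on the qpair fails, and bdev_nvme fails over to the next registered trid (4420 to 4421) and resets the controller. A sketch of the same reaction with the raw host API; spdk_nvme_qpair_process_completions and spdk_nvme_ctrlr_reset are real SPDK calls, the surrounding poll function is an assumption:

    #include "spdk/nvme.h"

    /* Poll one I/O qpair; on transport failure (e.g. the TCP socket
     * going bad, as logged above) fall back to a controller reset. */
    static void
    poll_io(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);
        if (rc < 0) {
            /* Queued I/O has already completed with ABORTED - SQ
             * DELETION; reset before submitting anything new. */
            if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
                /* controller remains in failed state, as logged */
            }
        }
    }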
00:19:21.610 8533.50 IOPS, 33.33 MiB/s [2024-12-07T22:50:36.376Z] 9117.00 IOPS, 35.61 MiB/s [2024-12-07T22:50:36.376Z] 9444.50 IOPS, 36.89 MiB/s [2024-12-07T22:50:36.376Z] [2024-12-07 22:50:25.914522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.610 [2024-12-07 22:50:25.914585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:25.914629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.610 [2024-12-07 22:50:25.914645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:25.914659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:116752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.610 [2024-12-07 22:50:25.914673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:25.914687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.610 [2024-12-07 22:50:25.914720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:25.914736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.610 [2024-12-07 22:50:25.914749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:25.914763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.610 [2024-12-07 22:50:25.914777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:25.914790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:116784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.610 [2024-12-07 22:50:25.914804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:25.914817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:116792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.610 [2024-12-07 22:50:25.914830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.610 [2024-12-07 22:50:25.914845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.914858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.914872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.914912] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.914939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.914972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.914988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.915003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.915034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:116328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.915064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.915095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.915125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.915167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:116360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.915201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:116368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.915232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.915307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:116384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.915350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.915377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:116400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.915403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:116408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.915430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:116800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.915457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:116808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.915484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:116816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.915511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:116824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.915538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:116832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.915565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.915599] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:116848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.915626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.915653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.915681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.915709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:116880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.915736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:116888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.915762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:116896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.915790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:116904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.915817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:116912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.915844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.915871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.915898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.915947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.915971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:116432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.915988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:116448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:116464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:116472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:116928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.916157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:116936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.916184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:21.611 [2024-12-07 22:50:25.916199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:116944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.916212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:116952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.916240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:116960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.916268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:116968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.916310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.916345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:116984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.916372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:116992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.916399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.916425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:117008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.916453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:117016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.916480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 
22:50:25.916494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:117024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.916507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.916534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.916561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.611 [2024-12-07 22:50:25.916588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:116504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:116512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:116520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:116544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:116568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.611 [2024-12-07 22:50:25.916968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:116576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.611 [2024-12-07 22:50:25.916982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.916997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917088] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:116608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:116616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:116632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:116656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:116664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.612 [2024-12-07 22:50:25.917340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:117064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.612 [2024-12-07 22:50:25.917367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:117072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.612 [2024-12-07 22:50:25.917394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:117080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.612 [2024-12-07 22:50:25.917427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.612 [2024-12-07 22:50:25.917455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:117096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.612 [2024-12-07 22:50:25.917482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.612 [2024-12-07 22:50:25.917509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:21.612 [2024-12-07 22:50:25.917536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:116680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:116688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116704 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:116712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:21.612 [2024-12-07 22:50:25.917747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b29bb0 is same with the state(6) to be set 00:19:21.612 [2024-12-07 22:50:25.917777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.612 [2024-12-07 22:50:25.917787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.612 [2024-12-07 22:50:25.917804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116728 len:8 PRP1 0x0 PRP2 0x0 00:19:21.612 [2024-12-07 22:50:25.917817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.612 [2024-12-07 22:50:25.917841] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.612 [2024-12-07 22:50:25.917851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117120 len:8 PRP1 0x0 PRP2 0x0 00:19:21.612 [2024-12-07 22:50:25.917864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.612 [2024-12-07 22:50:25.917886] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.612 [2024-12-07 22:50:25.917895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117128 len:8 PRP1 0x0 PRP2 0x0 00:19:21.612 [2024-12-07 22:50:25.917919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.612 [2024-12-07 22:50:25.917943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.612 [2024-12-07 22:50:25.917953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117136 len:8 PRP1 0x0 PRP2 0x0 00:19:21.612 [2024-12-07 22:50:25.917965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.917978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.612 [2024-12-07 22:50:25.917987] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.612 [2024-12-07 22:50:25.917997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117144 len:8 PRP1 0x0 PRP2 0x0 00:19:21.612 [2024-12-07 22:50:25.918010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.918022] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.612 [2024-12-07 22:50:25.918032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.612 [2024-12-07 22:50:25.918041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117152 len:8 PRP1 0x0 PRP2 0x0 00:19:21.612 [2024-12-07 22:50:25.918054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.918067] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.612 [2024-12-07 22:50:25.918076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.612 [2024-12-07 22:50:25.918086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117160 len:8 PRP1 0x0 PRP2 0x0 00:19:21.612 [2024-12-07 22:50:25.918099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.918112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.612 [2024-12-07 22:50:25.918123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.612 [2024-12-07 22:50:25.918134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117168 len:8 PRP1 0x0 PRP2 0x0 00:19:21.612 [2024-12-07 22:50:25.918146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.918159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.612 [2024-12-07 22:50:25.918175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.612 [2024-12-07 22:50:25.918186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117176 len:8 PRP1 0x0 PRP2 0x0 00:19:21.612 [2024-12-07 22:50:25.918198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.918211] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.612 [2024-12-07 22:50:25.918220] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:21.612 [2024-12-07 22:50:25.918230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117184 len:8 PRP1 0x0 PRP2 0x0 00:19:21.612 [2024-12-07 22:50:25.918243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:21.612 [2024-12-07 22:50:25.918256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:21.612 [2024-12-07 22:50:25.918265] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:21.612 [2024-12-07 22:50:25.918275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:117192 len:8 PRP1 0x0 PRP2 0x0
00:19:21.612 [2024-12-07 22:50:25.918288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same abort/manual-complete/ABORTED - SQ DELETION (00/08) triplet repeats for the queued WRITE commands at lba:117200 through lba:117304 (len:8, stride 8) ...]
00:19:21.612 [2024-12-07 22:50:25.919017] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b29bb0 was disconnected and freed. reset controller.
00:19:21.612 [2024-12-07 22:50:25.919036] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422
[... four queued ASYNC EVENT REQUEST admin commands (qid:0 cid:3..0) aborted with SQ DELETION (00/08) ...]
00:19:21.613 [2024-12-07 22:50:25.919218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:21.613 [2024-12-07 22:50:25.919283] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b05f10 (9): Bad file descriptor
00:19:21.613 [2024-12-07 22:50:25.922823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:21.613 [2024-12-07 22:50:25.954421] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:21.613 9513.60 IOPS, 37.16 MiB/s [2024-12-07T22:50:36.379Z] 9651.33 IOPS, 37.70 MiB/s [2024-12-07T22:50:36.379Z] 9739.43 IOPS, 38.04 MiB/s [2024-12-07T22:50:36.379Z] 9812.00 IOPS, 38.33 MiB/s [2024-12-07T22:50:36.379Z] 9857.78 IOPS, 38.51 MiB/s [2024-12-07T22:50:36.379Z]
[2024-12-07 22:50:30.492818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:21.613 [2024-12-07 22:50:30.492901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/ABORTED - SQ DELETION (00/08) pair repeats for every in-flight command on qid:1: READ commands covering lba:104952 through lba:105504 and WRITE commands covering lba:105520 through lba:105960 (len:8 each) ...]
00:19:21.614 [2024-12-07 22:50:30.497283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:19:21.614 [2024-12-07 22:50:30.497299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:21.614 [2024-12-07 22:50:30.497327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105512 len:8 PRP1 0x0 PRP2 0x0
00:19:21.614 [2024-12-07 22:50:30.497341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:21.614 [2024-12-07 22:50:30.497416] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b29870 was disconnected and freed. reset controller.
00:19:21.614 [2024-12-07 22:50:30.497444] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420
[... four queued ASYNC EVENT REQUEST admin commands (qid:0 cid:0..3) aborted with SQ DELETION (00/08) ...]
00:19:21.614 [2024-12-07 22:50:30.497632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:21.614 [2024-12-07 22:50:30.497695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b05f10 (9): Bad file descriptor
00:19:21.614 [2024-12-07 22:50:30.501687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:21.614 [2024-12-07 22:50:30.540444] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
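A note on the status printed throughout these bursts: the parenthesized pair, e.g. (00/08), is the NVMe status code type and status code of the completion, and 0x0/0x08 is the generic "Command Aborted due to SQ Deletion" status that SPDK renders as ABORTED - SQ DELETION. A minimal shell sketch of that mapping, covering only the values actually seen in this log:

    # Decode the "(SCT/SC)" pair that spdk_nvme_print_completion appends to each status.
    decode_nvme_status() {
        case "$1" in
            00/00) echo 'SUCCESS - successful completion' ;;
            00/08) echo 'ABORTED - SQ DELETION (command aborted because its submission queue was deleted)' ;;
            *)     echo "not mapped here: $1 (see the NVMe base spec status code tables)" ;;
        esac
    }
    decode_nvme_status 00/08   # -> ABORTED - SQ DELETION (...)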
00:19:21.614 9853.50 IOPS, 38.49 MiB/s [2024-12-07T22:50:36.380Z] 9910.45 IOPS, 38.71 MiB/s [2024-12-07T22:50:36.380Z] 9947.42 IOPS, 38.86 MiB/s [2024-12-07T22:50:36.380Z] 9985.15 IOPS, 39.00 MiB/s [2024-12-07T22:50:36.380Z] 10013.07 IOPS, 39.11 MiB/s [2024-12-07T22:50:36.380Z] 10034.07 IOPS, 39.20 MiB/s
00:19:21.614 Latency(us)
00:19:21.614 [2024-12-07T22:50:36.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:21.614 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:21.614 Verification LBA range: start 0x0 length 0x4000
00:19:21.614 NVMe0n1 : 15.01 10036.29 39.20 226.86 0.00 12443.46 547.37 15847.80
00:19:21.614 [2024-12-07T22:50:36.380Z] ===================================================================================================================
00:19:21.614 [2024-12-07T22:50:36.380Z] Total : 10036.29 39.20 226.86 0.00 12443.46 547.37 15847.80
00:19:21.614 Received shutdown signal, test time was about 15.000000 seconds
00:19:21.614 Latency(us)
00:19:21.614 [2024-12-07T22:50:36.380Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:21.614 [2024-12-07T22:50:36.380Z] ===================================================================================================================
00:19:21.614 [2024-12-07T22:50:36.380Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
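In the shell trace below, failover.sh@65-67 turn the notices above into the verdict for this phase: the script expects exactly three 'Resetting controller successful' lines, one per failover leg (4420 to 4421, 4421 to 4422, 4422 back to 4420). A minimal sketch of that check, assuming the bdevperf output was captured to the try.txt file that the trace cats further down (the real script's piping is elided from the trace):

    log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt    # path taken from the failover.sh@94 cat below
    count=$(grep -c 'Resetting controller successful' "$log")  # same grep as failover.sh@65
    (( count == 3 )) || { echo "expected 3 successful resets, got $count" >&2; exit 1; }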
00:19:21.615 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:21.615 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:21.872 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:21.872 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:21.872 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:22.131 [2024-12-07 22:50:36.857030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:22.131 22:50:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:22.390 [2024-12-07 22:50:37.101232] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:22.390 22:50:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:22.959 NVMe0n1 00:19:22.959 22:50:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:23.218 00:19:23.218 22:50:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:23.476 00:19:23.476 22:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:23.476 22:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:19:23.735 22:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:24.010 22:50:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:19:27.299 22:50:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:27.299 22:50:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:19:27.299 22:50:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=89874 00:19:27.299 22:50:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:27.299 22:50:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 89874 00:19:28.692 { 00:19:28.692 "results": [ 00:19:28.692 { 00:19:28.692 "job": "NVMe0n1", 00:19:28.692 "core_mask": "0x1", 00:19:28.692 "workload": "verify", 00:19:28.692 "status": "finished", 00:19:28.692 "verify_range": { 00:19:28.692 "start": 0, 00:19:28.692 "length": 16384 00:19:28.692 }, 00:19:28.692 "queue_depth": 128, 00:19:28.692 "io_size": 4096, 
00:19:28.692 "runtime": 1.008395, 00:19:28.692 "iops": 8226.934881668394, 00:19:28.692 "mibps": 32.136464381517165, 00:19:28.692 "io_failed": 0, 00:19:28.692 "io_timeout": 0, 00:19:28.692 "avg_latency_us": 15475.625507144736, 00:19:28.692 "min_latency_us": 904.8436363636364, 00:19:28.692 "max_latency_us": 14298.763636363636 00:19:28.692 } 00:19:28.692 ], 00:19:28.692 "core_count": 1 00:19:28.692 } 00:19:28.692 22:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:28.692 [2024-12-07 22:50:36.344284] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:28.692 [2024-12-07 22:50:36.345063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89799 ] 00:19:28.692 [2024-12-07 22:50:36.483653] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.692 [2024-12-07 22:50:36.516530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.693 [2024-12-07 22:50:36.543830] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:28.693 [2024-12-07 22:50:38.650415] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:28.693 [2024-12-07 22:50:38.650534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.693 [2024-12-07 22:50:38.650557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.693 [2024-12-07 22:50:38.650573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.693 [2024-12-07 22:50:38.650586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.693 [2024-12-07 22:50:38.650599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.693 [2024-12-07 22:50:38.650611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.693 [2024-12-07 22:50:38.650624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.693 [2024-12-07 22:50:38.650635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.693 [2024-12-07 22:50:38.650647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:28.693 [2024-12-07 22:50:38.650693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:28.693 [2024-12-07 22:50:38.650720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffdf10 (9): Bad file descriptor 00:19:28.693 [2024-12-07 22:50:38.654994] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:28.693 Running I/O for 1 seconds... 
00:19:28.693 8160.00 IOPS, 31.88 MiB/s 00:19:28.693 Latency(us) 00:19:28.693 [2024-12-07T22:50:43.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.693 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:28.693 Verification LBA range: start 0x0 length 0x4000 00:19:28.693 NVMe0n1 : 1.01 8226.93 32.14 0.00 0.00 15475.63 904.84 14298.76 00:19:28.693 [2024-12-07T22:50:43.459Z] =================================================================================================================== 00:19:28.693 [2024-12-07T22:50:43.459Z] Total : 8226.93 32.14 0.00 0.00 15475.63 904.84 14298.76 00:19:28.693 22:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:28.693 22:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:28.693 22:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:28.952 22:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:28.952 22:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:29.212 22:50:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:29.471 22:50:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:32.753 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:32.753 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:32.753 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 89799 00:19:32.753 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 89799 ']' 00:19:32.753 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 89799 00:19:32.753 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:32.753 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.753 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89799 00:19:33.011 killing process with pid 89799 00:19:33.011 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:33.011 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:33.011 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89799' 00:19:33.011 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 89799 00:19:33.011 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 89799 00:19:33.011 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:19:33.011 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:33.270 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:33.270 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:33.270 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:33.270 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:33.270 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:19:33.270 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:33.270 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:19:33.270 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:33.270 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:33.270 rmmod nvme_tcp 00:19:33.270 rmmod nvme_fabrics 00:19:33.270 rmmod nvme_keyring 00:19:33.271 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:33.271 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:19:33.271 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:19:33.271 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 89558 ']' 00:19:33.271 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 89558 00:19:33.271 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 89558 ']' 00:19:33.271 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 89558 00:19:33.271 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:33.271 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.271 22:50:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89558 00:19:33.271 killing process with pid 89558 00:19:33.271 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:33.271 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:33.271 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89558' 00:19:33.271 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 89558 00:19:33.271 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 89558 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:33.529 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:19:33.788 00:19:33.788 real 0m31.089s 00:19:33.788 user 2m0.447s 00:19:33.788 sys 0m5.141s 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:33.788 ************************************ 00:19:33.788 END TEST nvmf_failover 00:19:33.788 ************************************ 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:33.788 ************************************ 00:19:33.788 START TEST nvmf_host_discovery 00:19:33.788 ************************************ 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:33.788 * Looking for test storage... 
00:19:33.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:19:33.788 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:34.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.048 --rc genhtml_branch_coverage=1 00:19:34.048 --rc genhtml_function_coverage=1 00:19:34.048 --rc genhtml_legend=1 00:19:34.048 --rc geninfo_all_blocks=1 00:19:34.048 --rc geninfo_unexecuted_blocks=1 00:19:34.048 00:19:34.048 ' 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:34.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.048 --rc genhtml_branch_coverage=1 00:19:34.048 --rc genhtml_function_coverage=1 00:19:34.048 --rc genhtml_legend=1 00:19:34.048 --rc geninfo_all_blocks=1 00:19:34.048 --rc geninfo_unexecuted_blocks=1 00:19:34.048 00:19:34.048 ' 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:34.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.048 --rc genhtml_branch_coverage=1 00:19:34.048 --rc genhtml_function_coverage=1 00:19:34.048 --rc genhtml_legend=1 00:19:34.048 --rc geninfo_all_blocks=1 00:19:34.048 --rc geninfo_unexecuted_blocks=1 00:19:34.048 00:19:34.048 ' 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:34.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.048 --rc genhtml_branch_coverage=1 00:19:34.048 --rc genhtml_function_coverage=1 00:19:34.048 --rc genhtml_legend=1 00:19:34.048 --rc geninfo_all_blocks=1 00:19:34.048 --rc geninfo_unexecuted_blocks=1 00:19:34.048 00:19:34.048 ' 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.048 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:34.049 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
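The "[: : integer expression expected" complaint a few lines up comes from common.sh line 33 running a numeric test on an empty string ('[' '' -eq 1 ']'): an unset flag variable reached [ ... -eq 1 ] unguarded. The test command merely exits non-zero, which the surrounding if treats as false, so the run continues. A defensive sketch of the pattern that avoids the noise (SPDK_TEST_FLAG is a stand-in name, not the actual variable on that line):

    # give possibly-unset flags a numeric default before comparing
    if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
        echo 'flag enabled'
    fi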
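nvmftestinit next builds the virtual test network. The "Cannot find device" and "Cannot open network namespace" lines below are the tolerated cleanup of a previous topology that is not there (note each failed probe is followed by true); after that, two initiator veths in the root namespace (10.0.0.1, 10.0.0.2) and two target veths inside the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4) are joined by the nvmf_br bridge, the firewall is opened, and reachability is verified with pings. A condensed sketch of one interface pair; the second pair and the remaining link-up steps in the trace proceed identically:

    # each endpoint is a veth pair: *_if carries the address, *_br plugs into the bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # open the NVMe/TCP port, tagging the rule so teardown can strip it by comment
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3   # root namespace can reach the target address through the bridge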
00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:34.049 Cannot find device "nvmf_init_br" 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:34.049 Cannot find device "nvmf_init_br2" 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:34.049 Cannot find device "nvmf_tgt_br" 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:34.049 Cannot find device "nvmf_tgt_br2" 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:34.049 Cannot find device "nvmf_init_br" 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:34.049 Cannot find device "nvmf_init_br2" 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:34.049 Cannot find device "nvmf_tgt_br" 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:34.049 Cannot find device "nvmf_tgt_br2" 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:34.049 Cannot find device "nvmf_br" 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:34.049 Cannot find device "nvmf_init_if" 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:34.049 Cannot find device "nvmf_init_if2" 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:34.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:34.049 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:34.049 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:34.308 22:50:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:34.308 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:34.308 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:34.308 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:34.308 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:34.308 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:34.308 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:34.308 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:34.308 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:34.308 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:19:34.308 00:19:34.308 --- 10.0.0.3 ping statistics --- 00:19:34.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.308 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:19:34.308 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:34.308 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:34.308 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:19:34.308 00:19:34.308 --- 10.0.0.4 ping statistics --- 00:19:34.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.308 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:34.308 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:34.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:34.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:34.308 00:19:34.308 --- 10.0.0.1 ping statistics --- 00:19:34.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.308 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:34.308 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:34.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:34.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:19:34.308 00:19:34.308 --- 10.0.0.2 ping statistics --- 00:19:34.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.308 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:34.308 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:34.308 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # return 0 00:19:34.308 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:34.308 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:34.308 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:34.308 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:34.309 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:34.309 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:34.309 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:34.309 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:34.309 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:34.309 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.309 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.309 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=90193 00:19:34.309 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:34.309 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 90193 00:19:34.309 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 90193 ']' 00:19:34.309 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.309 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.309 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.309 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.309 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.567 [2024-12-07 22:50:49.119971] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:19:34.567 [2024-12-07 22:50:49.120058] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.567 [2024-12-07 22:50:49.258808] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.567 [2024-12-07 22:50:49.289818] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.567 [2024-12-07 22:50:49.289903] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.567 [2024-12-07 22:50:49.289914] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.567 [2024-12-07 22:50:49.289922] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.567 [2024-12-07 22:50:49.289927] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:34.567 [2024-12-07 22:50:49.289956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.567 [2024-12-07 22:50:49.315714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.826 [2024-12-07 22:50:49.422464] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.826 [2024-12-07 22:50:49.430570] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.826 22:50:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.826 null0 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.826 null1 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=90212 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 90212 /tmp/host.sock 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 90212 ']' 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.826 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.826 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:34.826 [2024-12-07 22:50:49.521936] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:19:34.826 [2024-12-07 22:50:49.522044] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90212 ] 00:19:35.085 [2024-12-07 22:50:49.660454] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.085 [2024-12-07 22:50:49.701353] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.085 [2024-12-07 22:50:49.733935] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:35.085 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.344 22:50:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:35.344 22:50:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.344 22:50:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.344 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.603 [2024-12-07 22:50:50.138674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:35.603 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.862 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:19:35.862 22:50:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:19:36.122 [2024-12-07 22:50:50.805321] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:36.122 [2024-12-07 22:50:50.805365] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:36.122 [2024-12-07 22:50:50.805385] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:36.122 
[2024-12-07 22:50:50.811366] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:36.122 [2024-12-07 22:50:50.867849] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:36.122 [2024-12-07 22:50:50.867899] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:36.689 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
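For reference, the trace up to this point corresponds to the following standalone JSON-RPC sequence, written as a minimal sketch against SPDK's scripts/rpc.py client; this assumes the suite's rpc_cmd helper drives the same methods and that a null bdev named null0 was created earlier in the script, outside this excerpt:

# Illustrative sketch only, not the test's literal code.
# Target side (default RPC socket): export nqn.2016-06.io.spdk:cnode0 on 10.0.0.3:4420.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0   # null0: assumed pre-created null bdev
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
# Host side (-s /tmp/host.sock): attach a discovery controller to the discovery service on port 8009.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers   # -> "nvme0" once attach completes
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs              # -> "nvme0n1"

Note the ordering the trace verifies: until nvmf_subsystem_add_host allows the host NQN, the discovery log page exposes nothing, which is why the controller and bdev lists read back empty first and nvme0/nvme0n1 only appear after the add_host call.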
00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:36.948 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.949 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:36.949 [2024-12-07 22:50:51.712102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:37.208 [2024-12-07 22:50:51.712583] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:37.208 [2024-12-07 22:50:51.712623] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:37.208 [2024-12-07 22:50:51.718575] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.208 [2024-12-07 22:50:51.778917] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:37.208 [2024-12-07 22:50:51.778942] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:37.208 [2024-12-07 22:50:51.778949] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 
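The repeated max=10 / eval / sleep 1 noise in the xtrace above is a polling helper from common/autotest_common.sh. A minimal reconstruction from the trace follows; the real helper may differ in detail (error reporting, quoting):

# Reconstructed sketch of the polling loop visible in the trace; illustrative only.
waitforcondition() {
    local cond=$1   # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
    local max=10
    while ((max--)); do
        eval "$cond" && return 0   # condition met: stop waiting
        sleep 1                    # otherwise retry, for up to ~10 s
    done
    return 1                       # condition never became true
}
# Usage as in the trace: block until the second path (port 4421) is visible on nvme0.
waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "4420 4421" ]]'

This is why each check in the log appears once per second: a failed eval (for example, [[ '' == nvme0 ]]) is followed by sleep 1 and a re-read of the controller or bdev list until the discovery poller catches up.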
00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.208 [2024-12-07 22:50:51.936807] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:37.208 [2024-12-07 22:50:51.936849] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.208 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:37.208 [2024-12-07 22:50:51.942854] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:19:37.208 [2024-12-07 22:50:51.942919] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:37.208 [2024-12-07 22:50:51.943012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.208 [2024-12-07 22:50:51.943043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.208 [2024-12-07 22:50:51.943058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.208 [2024-12-07 22:50:51.943068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.208 [2024-12-07 22:50:51.943079] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.208 [2024-12-07 22:50:51.943088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.209 [2024-12-07 22:50:51.943098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:37.209 [2024-12-07 22:50:51.943109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:37.209 [2024-12-07 22:50:51.943119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b3480 is same with the state(6) to be set 00:19:37.209 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:37.209 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:37.209 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:37.209 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.209 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:37.209 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.209 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:37.209 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.467 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.467 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.467 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:37.467 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:37.467 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.467 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.467 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:37.467 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:37.467 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:37.467 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:37.467 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.467 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:37.467 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.467 22:50:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.467 22:50:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:37.467 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.468 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.468 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.468 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:37.468 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:37.468 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.468 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.468 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:37.468 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:37.468 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:37.468 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:37.468 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:37.468 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.468 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.468 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:37.468 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 
-- # (( max-- )) 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.726 22:50:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:38.663 [2024-12-07 22:50:53.366247] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:38.663 [2024-12-07 22:50:53.366273] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:38.663 [2024-12-07 22:50:53.366292] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:38.663 [2024-12-07 22:50:53.372331] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:19:38.923 [2024-12-07 22:50:53.432989] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:38.923 [2024-12-07 22:50:53.433044] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:19:38.923 request: 00:19:38.923 { 00:19:38.923 "name": "nvme", 00:19:38.923 "trtype": "tcp", 00:19:38.923 "traddr": "10.0.0.3", 00:19:38.923 "adrfam": "ipv4", 00:19:38.923 "trsvcid": "8009", 00:19:38.923 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:38.923 "wait_for_attach": true, 00:19:38.923 "method": "bdev_nvme_start_discovery", 00:19:38.923 "req_id": 1 00:19:38.923 } 00:19:38.923 Got JSON-RPC error response 00:19:38.923 response: 00:19:38.923 { 00:19:38.923 "code": -17, 00:19:38.923 "message": "File exists" 00:19:38.923 } 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.923 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:38.923 request: 00:19:38.923 { 00:19:38.923 "name": "nvme_second", 00:19:38.923 "trtype": "tcp", 00:19:38.923 "traddr": "10.0.0.3", 00:19:38.923 "adrfam": "ipv4", 00:19:38.923 "trsvcid": "8009", 00:19:38.923 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:38.924 "wait_for_attach": true, 00:19:38.924 "method": "bdev_nvme_start_discovery", 00:19:38.924 "req_id": 1 00:19:38.924 } 00:19:38.924 Got JSON-RPC error response 00:19:38.924 response: 00:19:38.924 { 00:19:38.924 "code": -17, 00:19:38.924 "message": "File exists" 00:19:38.924 } 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:38.924 22:50:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.924 22:50:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.409 [2024-12-07 22:50:54.677492] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.409 [2024-12-07 22:50:54.677553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e4bc0 with addr=10.0.0.3, port=8010 00:19:40.409 [2024-12-07 22:50:54.677571] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:40.409 [2024-12-07 22:50:54.677580] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:40.409 [2024-12-07 22:50:54.677589] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:40.974 [2024-12-07 22:50:55.677458] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:40.974 [2024-12-07 22:50:55.677528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20e4bc0 with addr=10.0.0.3, port=8010 00:19:40.974 [2024-12-07 22:50:55.677543] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:40.974 [2024-12-07 22:50:55.677551] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:40.974 [2024-12-07 22:50:55.677559] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:42.350 [2024-12-07 22:50:56.677390] 
bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:19:42.350 request: 00:19:42.350 { 00:19:42.350 "name": "nvme_second", 00:19:42.350 "trtype": "tcp", 00:19:42.350 "traddr": "10.0.0.3", 00:19:42.350 "adrfam": "ipv4", 00:19:42.350 "trsvcid": "8010", 00:19:42.350 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:42.350 "wait_for_attach": false, 00:19:42.350 "attach_timeout_ms": 3000, 00:19:42.350 "method": "bdev_nvme_start_discovery", 00:19:42.350 "req_id": 1 00:19:42.350 } 00:19:42.350 Got JSON-RPC error response 00:19:42.350 response: 00:19:42.350 { 00:19:42.350 "code": -110, 00:19:42.350 "message": "Connection timed out" 00:19:42.350 } 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 90212 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:42.350 rmmod nvme_tcp 00:19:42.350 rmmod nvme_fabrics 00:19:42.350 rmmod nvme_keyring 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:19:42.350 22:50:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 90193 ']' 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 90193 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 90193 ']' 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 90193 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90193 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:42.350 killing process with pid 90193 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90193' 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 90193 00:19:42.350 22:50:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 90193 00:19:42.350 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:42.350 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:42.350 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:42.350 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:19:42.350 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:42.350 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:19:42.350 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:19:42.350 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:42.350 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:42.350 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:42.350 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:42.350 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:42.350 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:42.350 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:42.350 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:42.351 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:42.351 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:42.351 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete 
nvmf_br type bridge 00:19:42.608 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:42.608 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:42.608 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:42.608 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:42.608 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:42.608 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.608 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.608 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.608 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:19:42.608 00:19:42.608 real 0m8.811s 00:19:42.608 user 0m16.874s 00:19:42.608 sys 0m1.765s 00:19:42.608 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:42.608 ************************************ 00:19:42.608 END TEST nvmf_host_discovery 00:19:42.608 ************************************ 00:19:42.608 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.609 22:50:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:42.609 22:50:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:42.609 22:50:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:42.609 22:50:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:42.609 ************************************ 00:19:42.609 START TEST nvmf_host_multipath_status 00:19:42.609 ************************************ 00:19:42.609 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:42.868 * Looking for test storage... 
00:19:42.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:42.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.868 --rc genhtml_branch_coverage=1 00:19:42.868 --rc genhtml_function_coverage=1 00:19:42.868 --rc genhtml_legend=1 00:19:42.868 --rc geninfo_all_blocks=1 00:19:42.868 --rc geninfo_unexecuted_blocks=1 00:19:42.868 00:19:42.868 ' 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:42.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.868 --rc genhtml_branch_coverage=1 00:19:42.868 --rc genhtml_function_coverage=1 00:19:42.868 --rc genhtml_legend=1 00:19:42.868 --rc geninfo_all_blocks=1 00:19:42.868 --rc geninfo_unexecuted_blocks=1 00:19:42.868 00:19:42.868 ' 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:42.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.868 --rc genhtml_branch_coverage=1 00:19:42.868 --rc genhtml_function_coverage=1 00:19:42.868 --rc genhtml_legend=1 00:19:42.868 --rc geninfo_all_blocks=1 00:19:42.868 --rc geninfo_unexecuted_blocks=1 00:19:42.868 00:19:42.868 ' 00:19:42.868 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:42.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.868 --rc genhtml_branch_coverage=1 00:19:42.868 --rc genhtml_function_coverage=1 00:19:42.869 --rc genhtml_legend=1 00:19:42.869 --rc geninfo_all_blocks=1 00:19:42.869 --rc geninfo_unexecuted_blocks=1 00:19:42.869 00:19:42.869 ' 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:42.869 22:50:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:42.869 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:42.869 Cannot find device "nvmf_init_br" 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:42.869 Cannot find device "nvmf_init_br2" 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:42.869 Cannot find device "nvmf_tgt_br" 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:42.869 Cannot find device "nvmf_tgt_br2" 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:19:42.869 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:42.869 Cannot find device "nvmf_init_br" 00:19:42.870 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:19:42.870 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:42.870 Cannot find device "nvmf_init_br2" 00:19:42.870 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:19:42.870 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:42.870 Cannot find device "nvmf_tgt_br" 00:19:42.870 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:19:42.870 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:42.870 Cannot find device "nvmf_tgt_br2" 00:19:42.870 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:19:42.870 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:42.870 Cannot find device "nvmf_br" 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:19:43.128 Cannot find device "nvmf_init_if" 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:43.128 Cannot find device "nvmf_init_if2" 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:43.128 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:43.128 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:43.128 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:43.386 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:43.386 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:43.386 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:43.386 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:43.386 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:43.386 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:43.386 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:43.386 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:43.386 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:19:43.386 00:19:43.386 --- 10.0.0.3 ping statistics --- 00:19:43.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.386 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:43.386 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:43.386 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:43.386 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:19:43.386 00:19:43.386 --- 10.0.0.4 ping statistics --- 00:19:43.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.386 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:43.386 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:43.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:43.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:19:43.386 00:19:43.386 --- 10.0.0.1 ping statistics --- 00:19:43.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.386 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:19:43.386 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:43.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:19:43.386 00:19:43.386 --- 10.0.0.2 ping statistics --- 00:19:43.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.386 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:19:43.386 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.386 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # return 0 00:19:43.386 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:43.386 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=90708 00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 90708 00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 90708 ']' 00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:43.387 22:50:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:43.387 [2024-12-07 22:50:58.018752] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:19:43.387 [2024-12-07 22:50:58.018855] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.646 [2024-12-07 22:50:58.154440] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:43.646 [2024-12-07 22:50:58.198320] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.646 [2024-12-07 22:50:58.198384] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.646 [2024-12-07 22:50:58.198400] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.646 [2024-12-07 22:50:58.198410] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.646 [2024-12-07 22:50:58.198419] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:43.646 [2024-12-07 22:50:58.199142] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.646 [2024-12-07 22:50:58.199159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.646 [2024-12-07 22:50:58.235617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:43.646 22:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.646 22:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:19:43.646 22:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:43.646 22:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:43.646 22:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:43.646 22:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.646 22:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=90708 00:19:43.646 22:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:43.905 [2024-12-07 22:50:58.611065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.905 22:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:44.164 Malloc0 00:19:44.424 22:50:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:44.424 22:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:44.682 22:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:44.941 [2024-12-07 22:50:59.630333] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:44.941 22:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:45.200 [2024-12-07 22:50:59.914403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:45.200 22:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=90756 00:19:45.200 22:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:45.200 22:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:45.200 22:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 90756 /var/tmp/bdevperf.sock 00:19:45.200 22:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 90756 ']' 00:19:45.200 22:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:45.200 22:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:45.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:45.200 22:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:45.200 22:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:45.200 22:50:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:46.574 22:51:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:46.574 22:51:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:19:46.574 22:51:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:46.574 22:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:46.832 Nvme0n1 00:19:46.832 22:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:47.090 Nvme0n1 00:19:47.090 22:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:47.090 22:51:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:49.623 22:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:49.623 22:51:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:49.623 22:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:49.623 22:51:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:50.998 22:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:50.998 22:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:50.998 22:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:50.998 22:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:50.998 22:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:50.998 22:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:50.998 22:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:50.998 22:51:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:51.257 22:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:51.257 22:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:51.257 22:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.257 22:51:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:51.514 22:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:51.514 22:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:51.514 22:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.514 22:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:51.771 22:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:51.771 22:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:51.771 22:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:51.771 22:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:52.038 22:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.038 22:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:52.038 22:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:52.038 22:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:52.297 22:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:52.297 22:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:52.297 22:51:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:52.555 22:51:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:52.813 22:51:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:53.747 22:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:53.747 22:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:53.747 22:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:53.747 22:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:54.006 22:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:54.006 22:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:54.006 22:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.006 22:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:54.264 22:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:54.265 22:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:54.265 22:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.265 22:51:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:54.524 22:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:54.524 22:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:54.524 22:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.524 22:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:54.783 22:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:54.783 22:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:54.783 22:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:54.783 22:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:55.042 22:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.042 22:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:55.042 22:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:55.042 22:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:55.300 22:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:55.300 22:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:55.300 22:51:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:55.559 22:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:19:55.817 22:51:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:56.753 22:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:56.753 22:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:56.753 22:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:56.753 22:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:57.011 22:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:57.011 22:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:57.011 22:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.011 22:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:57.269 22:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:57.269 22:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:57.269 22:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.269 22:51:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:57.528 22:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:57.528 22:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:19:57.528 22:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.528 22:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:57.787 22:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:57.787 22:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:57.787 22:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.787 22:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:58.045 22:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.045 22:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:58.045 22:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:58.045 22:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.304 22:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.304 22:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:58.304 22:51:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:58.563 22:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:19:58.822 22:51:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:59.758 22:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:59.758 22:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:59.758 22:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.758 22:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:00.016 22:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:00.016 22:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:00.016 22:51:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:00.016 22:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.275 22:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:00.275 22:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:00.275 22:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.275 22:51:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:00.532 22:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:00.532 22:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:00.532 22:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.532 22:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:00.790 22:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:00.790 22:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:00.790 22:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.790 22:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:01.048 22:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.048 22:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:01.048 22:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.048 22:51:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:01.614 22:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:01.614 22:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:20:01.614 22:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:01.614 22:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:01.872 22:51:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:03.249 22:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:03.249 22:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:03.249 22:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:03.249 22:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.249 22:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:03.249 22:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:03.249 22:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.249 22:51:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:03.508 22:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:03.508 22:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:03.508 22:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.508 22:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:03.766 22:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:03.766 22:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:03.766 22:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.766 22:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:04.025 22:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.025 22:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:04.025 22:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.025 22:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:20:04.283 22:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:04.283 22:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:04.283 22:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.283 22:51:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:04.542 22:51:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:04.542 22:51:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:20:04.542 22:51:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:04.801 22:51:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:05.060 22:51:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:05.997 22:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:05.997 22:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:05.997 22:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:05.997 22:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:06.255 22:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:06.255 22:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:06.255 22:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.255 22:51:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:06.516 22:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.516 22:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:06.516 22:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.516 22:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
00:20:06.791 22:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.791 22:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:06.791 22:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.791 22:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:07.063 22:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:07.063 22:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:07.063 22:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.063 22:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:07.320 22:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:07.320 22:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:07.320 22:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:07.320 22:51:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.578 22:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:07.578 22:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:07.836 22:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:07.836 22:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:08.094 22:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:08.352 22:51:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:20:09.286 22:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:09.286 22:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:09.286 22:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
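The @116 line above switches the multipath policy of bdev Nvme0n1 from SPDK's default active_passive to active_active, and the test then repeats the same sweep of ANA-state combinations. Per the @59/@60 trace lines, set_ANA_state just updates the ANA state advertised by each of the two listeners of nqn.2016-06.io.spdk:cnode1, and the caller sleeps one second (@120) so the host can observe the change before the next check_status. A sketch under those assumptions ($rpc_py standing in for the full scripts/rpc.py path):

    set_ANA_state() {
        # $1 = ANA state for the 4420 listener, $2 = for the 4421 listener
        # (optimized | non_optimized | inaccessible)
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

The policy change is visible in the expectations: under active_passive only one path at a time was checked as current == true, while the optimized/optimized case that follows expects check_status true true true true true true, i.e. both the 4420 and 4421 paths active at once.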
00:20:09.286 22:51:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:09.544 22:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.544 22:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:09.544 22:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.544 22:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:09.802 22:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.803 22:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:09.803 22:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:09.803 22:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.061 22:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:10.061 22:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:10.061 22:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.061 22:51:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:10.320 22:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:10.320 22:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:10.320 22:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:10.320 22:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.576 22:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:10.576 22:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:10.576 22:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.576 22:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:10.834 22:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:10.834 
22:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:10.834 22:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:11.092 22:51:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:11.356 22:51:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:12.289 22:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:20:12.289 22:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:12.289 22:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.290 22:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:12.546 22:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:12.546 22:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:12.546 22:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.546 22:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:12.804 22:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:12.804 22:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:12.804 22:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.804 22:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:13.061 22:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.061 22:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:13.061 22:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.061 22:51:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:13.319 22:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.320 22:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:13.320 22:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:13.320 22:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.579 22:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.579 22:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:13.579 22:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.579 22:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:13.838 22:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.838 22:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:20:13.838 22:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:14.096 22:51:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:14.355 22:51:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:20:15.291 22:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:20:15.291 22:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:15.292 22:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.292 22:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:15.859 22:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:15.859 22:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:15.859 22:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.859 22:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:15.859 22:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:15.859 22:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:20:15.859 22:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.859 22:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:16.117 22:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.117 22:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:16.117 22:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.117 22:51:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:16.376 22:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.376 22:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:16.376 22:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.376 22:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:16.635 22:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.635 22:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:16.635 22:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:16.635 22:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.894 22:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.894 22:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:20:16.894 22:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:17.153 22:51:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:17.412 22:51:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:18.788 22:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:18.788 22:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:18.788 22:51:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:18.788 22:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:18.788 22:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:18.788 22:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:18.788 22:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:18.788 22:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.046 22:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:19.046 22:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:19.046 22:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.046 22:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:19.304 22:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.304 22:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:19.304 22:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:19.304 22:51:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.563 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.563 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:19.563 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.563 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:19.821 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.821 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:19.821 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.821 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:20:20.080 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:20.080 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 90756 00:20:20.080 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 90756 ']' 00:20:20.080 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 90756 00:20:20.080 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:20:20.080 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:20.080 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90756 00:20:20.080 killing process with pid 90756 00:20:20.080 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:20.080 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:20.080 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90756' 00:20:20.080 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 90756 00:20:20.080 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 90756 00:20:20.080 { 00:20:20.080 "results": [ 00:20:20.080 { 00:20:20.080 "job": "Nvme0n1", 00:20:20.080 "core_mask": "0x4", 00:20:20.080 "workload": "verify", 00:20:20.080 "status": "terminated", 00:20:20.080 "verify_range": { 00:20:20.080 "start": 0, 00:20:20.080 "length": 16384 00:20:20.080 }, 00:20:20.080 "queue_depth": 128, 00:20:20.080 "io_size": 4096, 00:20:20.081 "runtime": 32.803119, 00:20:20.081 "iops": 9461.996586361192, 00:20:20.081 "mibps": 36.96092416547341, 00:20:20.081 "io_failed": 0, 00:20:20.081 "io_timeout": 0, 00:20:20.081 "avg_latency_us": 13500.370102228537, 00:20:20.081 "min_latency_us": 696.32, 00:20:20.081 "max_latency_us": 4026531.84 00:20:20.081 } 00:20:20.081 ], 00:20:20.081 "core_count": 1 00:20:20.081 } 00:20:20.343 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 90756 00:20:20.343 22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:20.343 [2024-12-07 22:50:59.987500] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:20:20.343 [2024-12-07 22:50:59.987600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90756 ] 00:20:20.343 [2024-12-07 22:51:00.122327] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.344 [2024-12-07 22:51:00.156313] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.344 [2024-12-07 22:51:00.184169] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:20.344 [2024-12-07 22:51:01.775688] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:20:20.344 Running I/O for 90 seconds... 00:20:20.344 7573.00 IOPS, 29.58 MiB/s [2024-12-07T22:51:35.110Z] 7690.50 IOPS, 30.04 MiB/s [2024-12-07T22:51:35.110Z] 7772.33 IOPS, 30.36 MiB/s [2024-12-07T22:51:35.110Z] 7813.00 IOPS, 30.52 MiB/s [2024-12-07T22:51:35.110Z] 7786.60 IOPS, 30.42 MiB/s [2024-12-07T22:51:35.110Z] 8182.83 IOPS, 31.96 MiB/s [2024-12-07T22:51:35.110Z] 8515.86 IOPS, 33.27 MiB/s [2024-12-07T22:51:35.110Z] 8763.38 IOPS, 34.23 MiB/s [2024-12-07T22:51:35.110Z] 8977.89 IOPS, 35.07 MiB/s [2024-12-07T22:51:35.110Z] 9136.10 IOPS, 35.69 MiB/s [2024-12-07T22:51:35.110Z] 9251.00 IOPS, 36.14 MiB/s [2024-12-07T22:51:35.110Z] 9366.75 IOPS, 36.59 MiB/s [2024-12-07T22:51:35.110Z] 9462.08 IOPS, 36.96 MiB/s [2024-12-07T22:51:35.110Z] 9528.93 IOPS, 37.22 MiB/s [2024-12-07T22:51:35.110Z] [2024-12-07 22:51:16.348769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:54056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.344 [2024-12-07 22:51:16.348826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.348905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:54064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.344 [2024-12-07 22:51:16.348926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.348948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:54072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.344 [2024-12-07 22:51:16.348962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.348980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:54080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.344 [2024-12-07 22:51:16.348993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.344 [2024-12-07 22:51:16.349025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349043] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:54096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.344 [2024-12-07 22:51:16.349056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.344 [2024-12-07 22:51:16.349086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:54112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.344 [2024-12-07 22:51:16.349118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:53544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.344 [2024-12-07 22:51:16.349171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:53552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.344 [2024-12-07 22:51:16.349205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:53560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.344 [2024-12-07 22:51:16.349237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:53568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.344 [2024-12-07 22:51:16.349267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:53576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.344 [2024-12-07 22:51:16.349299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:53584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.344 [2024-12-07 22:51:16.349330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:53592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.344 [2024-12-07 22:51:16.349361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 
22:51:16.349380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:53600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:20.344 [2024-12-07 22:51:16.349393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:54120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.344 [2024-12-07 22:51:16.349630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:54128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.344 [2024-12-07 22:51:16.349670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:54136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.344 [2024-12-07 22:51:16.349706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:54144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.344 [2024-12-07 22:51:16.349739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:54152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.344 [2024-12-07 22:51:16.349784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:54160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.344 [2024-12-07 22:51:16.349819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:54168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.344 [2024-12-07 22:51:16.349852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:54176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.344 [2024-12-07 22:51:16.349898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:20.344 [2024-12-07 22:51:16.349939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:54184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:20.344 [2024-12-07 22:51:16.349953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 
cdw0:0 sqhd:0046 p:0 m:0 dnr:0
[log elided: 2024-12-07 22:51:16.349973 - 22:51:16.353644] nvme_qpair.c repeated the same two notices roughly a hundred more times on qid:1 -- nvme_io_qpair_print_command for WRITE (lba 54192-54560, SGL DATA BLOCK OFFSET) and READ (lba 53608-54048, SGL TRANSPORT DATA BLOCK) commands, len:8 each, with spdk_nvme_print_completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 for every one of them (sqhd 0046 through 002d).
9184.33 IOPS, 35.88 MiB/s [2024-12-07T22:51:35.113Z]
8610.31 IOPS, 33.63 MiB/s [2024-12-07T22:51:35.113Z]
8103.82 IOPS, 31.66 MiB/s [2024-12-07T22:51:35.113Z]
7653.61 IOPS, 29.90 MiB/s [2024-12-07T22:51:35.113Z]
7577.79 IOPS, 29.60 MiB/s [2024-12-07T22:51:35.113Z]
7720.00 IOPS, 30.16 MiB/s [2024-12-07T22:51:35.113Z]
7889.33 IOPS, 30.82 MiB/s [2024-12-07T22:51:35.113Z]
8191.55 IOPS, 32.00 MiB/s [2024-12-07T22:51:35.113Z]
8444.26 IOPS, 32.99 MiB/s [2024-12-07T22:51:35.113Z]
8646.50 IOPS, 33.78 MiB/s [2024-12-07T22:51:35.113Z]
8733.28 IOPS, 34.11 MiB/s [2024-12-07T22:51:35.113Z]
8793.69 IOPS, 34.35 MiB/s [2024-12-07T22:51:35.113Z]
8854.37 IOPS, 34.59 MiB/s [2024-12-07T22:51:35.113Z]
9043.18 IOPS, 35.32 MiB/s [2024-12-07T22:51:35.113Z]
9203.86 IOPS, 35.95 MiB/s [2024-12-07T22:51:35.113Z]
9350.57 IOPS, 36.53 MiB/s [2024-12-07T22:51:35.113Z]
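The periodic samples above pair an IOPS figure with a MiB/s figure; the two columns are consistent with the job's 4096-byte IO size (reported in the summary table further below). A quick sanity check, written here as a small standalone helper rather than anything taken from the test suite:

  # Convert an IOPS figure to MiB/s for 4 KiB IOs: iops * 4096 / 2^20.
  iops_to_mibs() {
      awk -v iops="$1" 'BEGIN { printf "%.2f\n", iops * 4096 / 1048576 }'
  }
  iops_to_mibs 9350.57   # prints 36.53, matching the '9350.57 IOPS, 36.53 MiB/s' sample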
[log elided: 2024-12-07 22:51:32.129003 - 22:51:32.133266] a second burst of the same pattern on qid:1 -- WRITE (lba 36752-37352) and READ (lba 36328-36976) commands, len:8 each, every completion again ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 (sqhd 000a through 0051).
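Both elided bursts are the same two message shapes repeated: nvme_io_qpair_print_command for the submitted READ or WRITE, and spdk_nvme_print_completion with status (03/02), i.e. status code type 0x3 (path-related) / status code 0x2 (asymmetric access inaccessible) -- consistent with the multipath_status test toggling the subsystem's ANA state while I/O is running. To quantify the noise from a saved copy of this console output (build.log is a hypothetical filename), something like the following would do:

  # How many completions carried the ANA-inaccessible path status?
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log
  # Break the affected commands down by opcode (only command lines mention READ/WRITE).
  grep -oE '\*NOTICE\*: (READ|WRITE)' build.log | sort | uniq -c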
9411.06 IOPS, 36.76 MiB/s [2024-12-07T22:51:35.115Z]
9443.47 IOPS, 36.89 MiB/s [2024-12-07T22:51:35.115Z]
00:20:20.349 Received shutdown signal, test time was about 32.803866 seconds
00:20:20.349
00:20:20.349                                                 Latency(us)
00:20:20.349 [2024-12-07T22:51:35.115Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min         max
00:20:20.349 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:20.349 Verification LBA range: start 0x0 length 0x4000
00:20:20.349 Nvme0n1            :      32.80    9462.00      36.96      0.00     0.00   13500.37     696.32  4026531.84
00:20:20.349 [2024-12-07T22:51:35.115Z] ===================================================================================================================
00:20:20.349 [2024-12-07T22:51:35.115Z] Total              :            9462.00      36.96      0.00     0.00   13500.37     696.32  4026531.84
00:20:20.349 [2024-12-07 22:51:34.731109] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times
22:51:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 90708 ']'
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 90708
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 90708 ']'
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 90708
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
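The checks traced just above and continuing below are the killprocess helper's guard rails before it signals the app. Condensed into a standalone sketch, reconstructed from the traced commands rather than copied from autotest_common.sh:

  # Kill a pid only if it is set, still alive, and not a bare sudo wrapper.
  killprocess() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1                    # pid must be provided
      kill -0 "$pid" 2>/dev/null || return 1       # process must still exist
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [ "$process_name" = sudo ] && return 1       # the real helper special-cases sudo
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                  # reap it so ports and files free up
  }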
']' 00:20:20.608 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90708 00:20:20.608 killing process with pid 90708 00:20:20.608 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:20.608 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:20.608 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90708' 00:20:20.608 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 90708 00:20:20.608 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 90708 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if2 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:20.867 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.126 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:20:21.126 ************************************ 00:20:21.126 END TEST nvmf_host_multipath_status 00:20:21.126 ************************************ 00:20:21.126 00:20:21.126 real 0m38.314s 00:20:21.126 user 2m3.858s 00:20:21.126 sys 0m11.074s 00:20:21.126 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:21.126 22:51:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:21.126 22:51:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:21.126 22:51:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:21.126 22:51:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:21.126 22:51:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:21.126 ************************************ 00:20:21.126 START TEST nvmf_discovery_remove_ifc 00:20:21.126 ************************************ 00:20:21.126 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:21.126 * Looking for test storage... 
00:20:21.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:21.126 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:21.126 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:20:21.126 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:21.126 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:21.126 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.126 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.126 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.126 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.126 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:21.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.127 --rc genhtml_branch_coverage=1 00:20:21.127 --rc genhtml_function_coverage=1 00:20:21.127 --rc genhtml_legend=1 00:20:21.127 --rc geninfo_all_blocks=1 00:20:21.127 --rc geninfo_unexecuted_blocks=1 00:20:21.127 00:20:21.127 ' 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:21.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.127 --rc genhtml_branch_coverage=1 00:20:21.127 --rc genhtml_function_coverage=1 00:20:21.127 --rc genhtml_legend=1 00:20:21.127 --rc geninfo_all_blocks=1 00:20:21.127 --rc geninfo_unexecuted_blocks=1 00:20:21.127 00:20:21.127 ' 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:21.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.127 --rc genhtml_branch_coverage=1 00:20:21.127 --rc genhtml_function_coverage=1 00:20:21.127 --rc genhtml_legend=1 00:20:21.127 --rc geninfo_all_blocks=1 00:20:21.127 --rc geninfo_unexecuted_blocks=1 00:20:21.127 00:20:21.127 ' 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:21.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.127 --rc genhtml_branch_coverage=1 00:20:21.127 --rc genhtml_function_coverage=1 00:20:21.127 --rc genhtml_legend=1 00:20:21.127 --rc geninfo_all_blocks=1 00:20:21.127 --rc geninfo_unexecuted_blocks=1 00:20:21.127 00:20:21.127 ' 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:21.127 22:51:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.127 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.386 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:20:21.386 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:20:21.386 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.386 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.386 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:21.386 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.386 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:21.386 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:21.386 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.386 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.386 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.386 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:21.387 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:21.387 22:51:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:21.387 Cannot find device "nvmf_init_br" 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:21.387 Cannot find device "nvmf_init_br2" 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:21.387 Cannot find device "nvmf_tgt_br" 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:21.387 Cannot find device "nvmf_tgt_br2" 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:21.387 Cannot find device "nvmf_init_br" 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:21.387 Cannot find device "nvmf_init_br2" 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:21.387 Cannot find device "nvmf_tgt_br" 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:20:21.387 22:51:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:21.387 Cannot find device "nvmf_tgt_br2" 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:21.387 Cannot find device "nvmf_br" 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:21.387 Cannot find device "nvmf_init_if" 00:20:21.387 22:51:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:21.387 Cannot find device "nvmf_init_if2" 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:21.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:21.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:21.387 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:21.646 22:51:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:21.646 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:21.646 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:20:21.646 00:20:21.646 --- 10.0.0.3 ping statistics --- 00:20:21.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.646 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:21.646 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:21.646 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:20:21.646 00:20:21.646 --- 10.0.0.4 ping statistics --- 00:20:21.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.646 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:21.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:21.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:21.646 00:20:21.646 --- 10.0.0.1 ping statistics --- 00:20:21.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.646 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:21.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:20:21.646 00:20:21.646 --- 10.0.0.2 ping statistics --- 00:20:21.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.646 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # return 0 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=91586 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 91586 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 91586 ']' 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:21.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.646 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
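[editorial note] At this point nvmfappstart has launched nvmf_tgt inside the target namespace (pid 91586) and waitforlisten is polling the RPC socket until the app answers. A minimal sketch of that polling pattern follows; the real waitforlisten helper in autotest_common.sh is more involved, and the loop bound and sleep interval here are illustrative assumptions. rpc_get_methods is a standard SPDK RPC, and rpc.py's -s (socket path) and -t (timeout) flags appear elsewhere in this log.

    # Sketch of a waitforlisten-style loop, not the verbatim helper.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            # Give up early if the target process died before listening.
            kill -0 "$pid" 2> /dev/null || return 1
            # Any successful RPC means the socket is up and serving.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }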
00:20:21.647 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:21.647 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:21.647 [2024-12-07 22:51:36.356662] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:20:21.647 [2024-12-07 22:51:36.357284] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.906 [2024-12-07 22:51:36.494933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.906 [2024-12-07 22:51:36.526483] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.906 [2024-12-07 22:51:36.526549] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.906 [2024-12-07 22:51:36.526574] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.906 [2024-12-07 22:51:36.526580] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.906 [2024-12-07 22:51:36.526586] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:21.906 [2024-12-07 22:51:36.526611] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.906 [2024-12-07 22:51:36.552903] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:21.906 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:21.906 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:20:21.906 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:21.906 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:21.906 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:21.906 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.906 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:21.906 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.906 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:22.165 [2024-12-07 22:51:36.671353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.165 [2024-12-07 22:51:36.679425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:22.165 null0 00:20:22.165 [2024-12-07 22:51:36.711314] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:22.165 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.165 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91611 00:20:22.165 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt 
-m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:22.165 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 91611 /tmp/host.sock 00:20:22.165 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 91611 ']' 00:20:22.165 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:20:22.165 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:22.165 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:22.165 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:22.165 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:22.165 22:51:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:22.165 [2024-12-07 22:51:36.789375] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:20:22.165 [2024-12-07 22:51:36.789485] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91611 ] 00:20:22.424 [2024-12-07 22:51:36.930012] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.424 [2024-12-07 22:51:36.972653] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.424 22:51:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:22.424 22:51:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:20:22.424 22:51:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:22.424 22:51:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:22.424 22:51:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.424 22:51:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:22.424 22:51:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.424 22:51:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:22.424 22:51:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.424 22:51:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:22.424 [2024-12-07 22:51:37.090117] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:22.424 22:51:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.424 22:51:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:22.424 22:51:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.424 22:51:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:23.361 [2024-12-07 22:51:38.124213] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:23.361 [2024-12-07 22:51:38.124275] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:23.361 [2024-12-07 22:51:38.124308] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:23.621 [2024-12-07 22:51:38.130253] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:23.621 [2024-12-07 22:51:38.186594] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:23.621 [2024-12-07 22:51:38.186665] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:23.621 [2024-12-07 22:51:38.186690] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:23.621 [2024-12-07 22:51:38.186704] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:23.621 [2024-12-07 22:51:38.186750] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:23.621 [2024-12-07 22:51:38.192777] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc216f0 was disconnected and freed. delete nvme_qpair. 
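[editorial note] The xtrace around here is the wait_for_bdev/get_bdev_list loop from discovery_remove_ifc.sh: list bdevs over the host RPC socket, reduce the JSON to a sorted, space-joined name list, and retry once per second until it matches the expected value. A sketch reconstructed from the visible xtrace (rpc_cmd in the script wraps rpc.py; this is not the verbatim source):

    get_bdev_list() {
        # bdev_get_bdevs returns JSON; extract, sort, and join the names.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll until the list equals the expectation: "nvme0n1" while the
        # path is attached, '' once it has been torn down.
        while [[ $(get_bdev_list) != "$1" ]]; do
            sleep 1
        done
    }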
00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:23.621 22:51:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:24.557 22:51:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:24.557 22:51:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:24.557 22:51:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:24.557 22:51:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.557 22:51:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:24.557 22:51:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:24.557 22:51:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:24.816 22:51:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.816 22:51:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:24.816 22:51:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:25.751 22:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:25.751 22:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:25.751 22:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:25.751 22:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.751 22:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:25.751 22:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:25.751 22:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:25.751 22:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.751 22:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:25.751 22:51:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:26.686 22:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:26.686 22:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:26.686 22:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:26.686 22:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.686 22:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:26.686 22:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:26.686 22:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:26.945 22:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.945 22:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:26.945 22:51:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:27.880 22:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:27.880 22:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:27.880 22:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:27.880 22:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.881 22:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:27.881 22:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:27.881 22:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:27.881 22:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:20:27.881 22:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:27.881 22:51:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:28.814 22:51:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:28.814 22:51:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:28.814 22:51:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:28.814 22:51:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.814 22:51:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.814 22:51:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:28.814 22:51:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:29.072 22:51:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.072 22:51:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:29.072 22:51:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:29.072 [2024-12-07 22:51:43.615093] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:29.072 [2024-12-07 22:51:43.615154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.072 [2024-12-07 22:51:43.615169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.072 [2024-12-07 22:51:43.615180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.072 [2024-12-07 22:51:43.615203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.072 [2024-12-07 22:51:43.615227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.072 [2024-12-07 22:51:43.615234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.072 [2024-12-07 22:51:43.615242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.072 [2024-12-07 22:51:43.615250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.072 [2024-12-07 22:51:43.615258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:29.072 [2024-12-07 22:51:43.615265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:29.072 [2024-12-07 22:51:43.615273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfcc40 is same with the state(6) to be set 
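[editorial note] The ABORTED - SQ DELETION dump and the "Connection timed out" read failure above are the direct result of the interface removal performed a few records earlier (host/discovery_remove_ifc.sh@75-76). For reference, the two commands that cut the 10.0.0.3 path, as shown in the xtrace:

    # Drop the target address and down the interface inside the target netns.
    ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down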
00:20:29.072 [2024-12-07 22:51:43.625087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfcc40 (9): Bad file descriptor 00:20:29.072 [2024-12-07 22:51:43.635115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:30.007 22:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:30.007 22:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:30.007 22:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.007 22:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:30.007 22:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:30.007 22:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:30.007 22:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:30.007 [2024-12-07 22:51:44.678958] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:20:30.007 [2024-12-07 22:51:44.679046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbfcc40 with addr=10.0.0.3, port=4420 00:20:30.007 [2024-12-07 22:51:44.679077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbfcc40 is same with the state(6) to be set 00:20:30.007 [2024-12-07 22:51:44.679138] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbfcc40 (9): Bad file descriptor 00:20:30.007 [2024-12-07 22:51:44.679957] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:30.007 [2024-12-07 22:51:44.680030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:30.007 [2024-12-07 22:51:44.680052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:30.007 [2024-12-07 22:51:44.680072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:30.007 [2024-12-07 22:51:44.680119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.007 [2024-12-07 22:51:44.680139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:30.007 22:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.008 22:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:30.008 22:51:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:30.941 [2024-12-07 22:51:45.680183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
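[editorial note] The reconnect failure above (uring connect() errno 110 against 10.0.0.3:4420) is retried on the schedule fixed when the discovery service was attached: with --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2, reconnects are attempted roughly once per second and the controller is given up after about two seconds of disconnection, which is what the following records show. For reference, the attach call from earlier in this test (flags as logged; the first -s is rpc.py's socket, the second is the discovery service port):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach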
00:20:30.941 [2024-12-07 22:51:45.680235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:30.941 [2024-12-07 22:51:45.680261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:30.941 [2024-12-07 22:51:45.680269] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:20:30.941 [2024-12-07 22:51:45.680295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:30.941 [2024-12-07 22:51:45.680321] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:20:30.941 [2024-12-07 22:51:45.680354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.941 [2024-12-07 22:51:45.680367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.941 [2024-12-07 22:51:45.680380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.941 [2024-12-07 22:51:45.680388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.941 [2024-12-07 22:51:45.680396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.941 [2024-12-07 22:51:45.680404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.941 [2024-12-07 22:51:45.680413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.941 [2024-12-07 22:51:45.680421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.941 [2024-12-07 22:51:45.680429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.941 [2024-12-07 22:51:45.680437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.941 [2024-12-07 22:51:45.680445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:20:30.941 [2024-12-07 22:51:45.681186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbeb180 (9): Bad file descriptor 00:20:30.941 [2024-12-07 22:51:45.682197] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:30.941 [2024-12-07 22:51:45.682235] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:20:31.207 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:31.208 22:51:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:32.190 22:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:32.190 22:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:32.190 22:51:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:32.190 22:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.190 22:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:32.190 22:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:32.190 22:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:32.190 22:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.190 22:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:32.190 22:51:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:33.125 [2024-12-07 22:51:47.687987] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:33.125 [2024-12-07 22:51:47.688011] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:33.125 [2024-12-07 22:51:47.688043] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:33.125 [2024-12-07 22:51:47.694019] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:20:33.125 [2024-12-07 22:51:47.749910] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:33.125 [2024-12-07 22:51:47.749979] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:33.125 [2024-12-07 22:51:47.750002] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:33.125 [2024-12-07 22:51:47.750016] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:20:33.125 [2024-12-07 22:51:47.750024] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:33.125 [2024-12-07 22:51:47.756501] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xc30af0 was disconnected and freed. delete nvme_qpair. 
00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91611 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 91611 ']' 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 91611 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91611 00:20:33.384 killing process with pid 91611 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91611' 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 91611 00:20:33.384 22:51:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 91611 00:20:33.384 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:33.384 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:33.384 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:33.644 rmmod nvme_tcp 00:20:33.644 rmmod nvme_fabrics 00:20:33.644 rmmod nvme_keyring 00:20:33.644 22:51:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 91586 ']' 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 91586 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 91586 ']' 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 91586 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91586 00:20:33.644 killing process with pid 91586 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91586' 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 91586 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 91586 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:33.644 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:20:33.903 00:20:33.903 real 0m12.899s 00:20:33.903 user 0m22.158s 00:20:33.903 sys 0m2.343s 00:20:33.903 ************************************ 00:20:33.903 END TEST nvmf_discovery_remove_ifc 00:20:33.903 ************************************ 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.903 ************************************ 00:20:33.903 START TEST nvmf_identify_kernel_target 00:20:33.903 ************************************ 00:20:33.903 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:34.163 * Looking for test storage... 
00:20:34.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:34.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.163 --rc genhtml_branch_coverage=1 00:20:34.163 --rc genhtml_function_coverage=1 00:20:34.163 --rc genhtml_legend=1 00:20:34.163 --rc geninfo_all_blocks=1 00:20:34.163 --rc geninfo_unexecuted_blocks=1 00:20:34.163 00:20:34.163 ' 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:34.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.163 --rc genhtml_branch_coverage=1 00:20:34.163 --rc genhtml_function_coverage=1 00:20:34.163 --rc genhtml_legend=1 00:20:34.163 --rc geninfo_all_blocks=1 00:20:34.163 --rc geninfo_unexecuted_blocks=1 00:20:34.163 00:20:34.163 ' 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:34.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.163 --rc genhtml_branch_coverage=1 00:20:34.163 --rc genhtml_function_coverage=1 00:20:34.163 --rc genhtml_legend=1 00:20:34.163 --rc geninfo_all_blocks=1 00:20:34.163 --rc geninfo_unexecuted_blocks=1 00:20:34.163 00:20:34.163 ' 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:34.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:34.163 --rc genhtml_branch_coverage=1 00:20:34.163 --rc genhtml_function_coverage=1 00:20:34.163 --rc genhtml_legend=1 00:20:34.163 --rc geninfo_all_blocks=1 00:20:34.163 --rc geninfo_unexecuted_blocks=1 00:20:34.163 00:20:34.163 ' 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.163 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:34.164 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:34.164 22:51:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:34.164 22:51:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:34.164 Cannot find device "nvmf_init_br" 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:34.164 Cannot find device "nvmf_init_br2" 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:34.164 Cannot find device "nvmf_tgt_br" 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:34.164 Cannot find device "nvmf_tgt_br2" 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:34.164 Cannot find device "nvmf_init_br" 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:34.164 Cannot find device "nvmf_init_br2" 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:34.164 Cannot find device "nvmf_tgt_br" 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:34.164 Cannot find device "nvmf_tgt_br2" 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:20:34.164 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:34.424 Cannot find device "nvmf_br" 00:20:34.424 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:20:34.424 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:34.424 Cannot find device "nvmf_init_if" 00:20:34.424 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:20:34.424 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:34.424 Cannot find device "nvmf_init_if2" 00:20:34.424 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:20:34.424 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:34.424 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.424 22:51:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:20:34.424 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:34.424 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.424 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:20:34.424 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:34.424 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:34.424 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:34.424 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:34.424 22:51:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:34.424 22:51:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:34.424 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:34.683 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:34.683 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:20:34.683 00:20:34.683 --- 10.0.0.3 ping statistics --- 00:20:34.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.683 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:34.683 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:34.683 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms 00:20:34.683 00:20:34.683 --- 10.0.0.4 ping statistics --- 00:20:34.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.683 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:34.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:34.683 00:20:34.683 --- 10.0.0.1 ping statistics --- 00:20:34.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.683 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:34.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:34.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:20:34.683 00:20:34.683 --- 10.0.0.2 ping statistics --- 00:20:34.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.683 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # return 0 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:34.683 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:34.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:34.942 Waiting for block devices as requested 00:20:35.202 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:35.202 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:35.202 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:35.202 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:35.202 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:20:35.202 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:35.202 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:35.202 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:35.203 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:20:35.203 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:35.203 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:35.203 No valid GPT data, bailing 00:20:35.203 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:35.462 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:35.462 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:35.462 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:20:35.462 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:35.462 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:35.462 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:20:35.462 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:35.462 22:51:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:35.462 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:35.462 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:20:35.462 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:35.462 22:51:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:35.462 No valid GPT data, bailing 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:35.462 No valid GPT data, bailing 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:35.462 No valid GPT data, bailing 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:20:35.462 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:35.722 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -a 10.0.0.1 -t tcp -s 4420 00:20:35.722 00:20:35.722 Discovery Log Number of Records 2, Generation counter 2 00:20:35.722 =====Discovery Log Entry 0====== 00:20:35.722 trtype: tcp 00:20:35.722 adrfam: ipv4 00:20:35.722 subtype: current discovery subsystem 00:20:35.722 treq: not specified, sq flow control disable supported 00:20:35.722 portid: 1 00:20:35.722 trsvcid: 4420 00:20:35.722 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:35.722 traddr: 10.0.0.1 00:20:35.722 eflags: none 00:20:35.722 sectype: none 00:20:35.722 =====Discovery Log Entry 1====== 00:20:35.722 trtype: tcp 00:20:35.722 adrfam: ipv4 00:20:35.722 subtype: nvme subsystem 00:20:35.722 treq: not 
specified, sq flow control disable supported 00:20:35.722 portid: 1 00:20:35.722 trsvcid: 4420 00:20:35.723 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:35.723 traddr: 10.0.0.1 00:20:35.723 eflags: none 00:20:35.723 sectype: none 00:20:35.723 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:35.723 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:35.723 ===================================================== 00:20:35.723 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:35.723 ===================================================== 00:20:35.723 Controller Capabilities/Features 00:20:35.723 ================================ 00:20:35.723 Vendor ID: 0000 00:20:35.723 Subsystem Vendor ID: 0000 00:20:35.723 Serial Number: 9c0d0b50385eccbc9e67 00:20:35.723 Model Number: Linux 00:20:35.723 Firmware Version: 6.8.9-20 00:20:35.723 Recommended Arb Burst: 0 00:20:35.723 IEEE OUI Identifier: 00 00 00 00:20:35.723 Multi-path I/O 00:20:35.723 May have multiple subsystem ports: No 00:20:35.723 May have multiple controllers: No 00:20:35.723 Associated with SR-IOV VF: No 00:20:35.723 Max Data Transfer Size: Unlimited 00:20:35.723 Max Number of Namespaces: 0 00:20:35.723 Max Number of I/O Queues: 1024 00:20:35.723 NVMe Specification Version (VS): 1.3 00:20:35.723 NVMe Specification Version (Identify): 1.3 00:20:35.723 Maximum Queue Entries: 1024 00:20:35.723 Contiguous Queues Required: No 00:20:35.723 Arbitration Mechanisms Supported 00:20:35.723 Weighted Round Robin: Not Supported 00:20:35.723 Vendor Specific: Not Supported 00:20:35.723 Reset Timeout: 7500 ms 00:20:35.723 Doorbell Stride: 4 bytes 00:20:35.723 NVM Subsystem Reset: Not Supported 00:20:35.723 Command Sets Supported 00:20:35.723 NVM Command Set: Supported 00:20:35.723 Boot Partition: Not Supported 00:20:35.723 Memory Page Size Minimum: 4096 bytes 00:20:35.723 Memory Page Size Maximum: 4096 bytes 00:20:35.723 Persistent Memory Region: Not Supported 00:20:35.723 Optional Asynchronous Events Supported 00:20:35.723 Namespace Attribute Notices: Not Supported 00:20:35.723 Firmware Activation Notices: Not Supported 00:20:35.723 ANA Change Notices: Not Supported 00:20:35.723 PLE Aggregate Log Change Notices: Not Supported 00:20:35.723 LBA Status Info Alert Notices: Not Supported 00:20:35.723 EGE Aggregate Log Change Notices: Not Supported 00:20:35.723 Normal NVM Subsystem Shutdown event: Not Supported 00:20:35.723 Zone Descriptor Change Notices: Not Supported 00:20:35.723 Discovery Log Change Notices: Supported 00:20:35.723 Controller Attributes 00:20:35.723 128-bit Host Identifier: Not Supported 00:20:35.723 Non-Operational Permissive Mode: Not Supported 00:20:35.723 NVM Sets: Not Supported 00:20:35.723 Read Recovery Levels: Not Supported 00:20:35.723 Endurance Groups: Not Supported 00:20:35.723 Predictable Latency Mode: Not Supported 00:20:35.723 Traffic Based Keep ALive: Not Supported 00:20:35.723 Namespace Granularity: Not Supported 00:20:35.723 SQ Associations: Not Supported 00:20:35.723 UUID List: Not Supported 00:20:35.723 Multi-Domain Subsystem: Not Supported 00:20:35.723 Fixed Capacity Management: Not Supported 00:20:35.723 Variable Capacity Management: Not Supported 00:20:35.723 Delete Endurance Group: Not Supported 00:20:35.723 Delete NVM Set: Not Supported 00:20:35.723 Extended LBA Formats Supported: Not Supported 00:20:35.723 Flexible Data 
Placement Supported: Not Supported 00:20:35.723 00:20:35.723 Controller Memory Buffer Support 00:20:35.723 ================================ 00:20:35.723 Supported: No 00:20:35.723 00:20:35.723 Persistent Memory Region Support 00:20:35.723 ================================ 00:20:35.723 Supported: No 00:20:35.723 00:20:35.723 Admin Command Set Attributes 00:20:35.723 ============================ 00:20:35.723 Security Send/Receive: Not Supported 00:20:35.723 Format NVM: Not Supported 00:20:35.723 Firmware Activate/Download: Not Supported 00:20:35.723 Namespace Management: Not Supported 00:20:35.723 Device Self-Test: Not Supported 00:20:35.723 Directives: Not Supported 00:20:35.723 NVMe-MI: Not Supported 00:20:35.723 Virtualization Management: Not Supported 00:20:35.723 Doorbell Buffer Config: Not Supported 00:20:35.723 Get LBA Status Capability: Not Supported 00:20:35.723 Command & Feature Lockdown Capability: Not Supported 00:20:35.723 Abort Command Limit: 1 00:20:35.723 Async Event Request Limit: 1 00:20:35.723 Number of Firmware Slots: N/A 00:20:35.723 Firmware Slot 1 Read-Only: N/A 00:20:35.723 Firmware Activation Without Reset: N/A 00:20:35.723 Multiple Update Detection Support: N/A 00:20:35.723 Firmware Update Granularity: No Information Provided 00:20:35.723 Per-Namespace SMART Log: No 00:20:35.723 Asymmetric Namespace Access Log Page: Not Supported 00:20:35.723 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:35.723 Command Effects Log Page: Not Supported 00:20:35.723 Get Log Page Extended Data: Supported 00:20:35.723 Telemetry Log Pages: Not Supported 00:20:35.723 Persistent Event Log Pages: Not Supported 00:20:35.723 Supported Log Pages Log Page: May Support 00:20:35.723 Commands Supported & Effects Log Page: Not Supported 00:20:35.723 Feature Identifiers & Effects Log Page:May Support 00:20:35.723 NVMe-MI Commands & Effects Log Page: May Support 00:20:35.723 Data Area 4 for Telemetry Log: Not Supported 00:20:35.723 Error Log Page Entries Supported: 1 00:20:35.723 Keep Alive: Not Supported 00:20:35.723 00:20:35.723 NVM Command Set Attributes 00:20:35.723 ========================== 00:20:35.723 Submission Queue Entry Size 00:20:35.723 Max: 1 00:20:35.723 Min: 1 00:20:35.723 Completion Queue Entry Size 00:20:35.723 Max: 1 00:20:35.723 Min: 1 00:20:35.723 Number of Namespaces: 0 00:20:35.723 Compare Command: Not Supported 00:20:35.723 Write Uncorrectable Command: Not Supported 00:20:35.723 Dataset Management Command: Not Supported 00:20:35.723 Write Zeroes Command: Not Supported 00:20:35.723 Set Features Save Field: Not Supported 00:20:35.723 Reservations: Not Supported 00:20:35.723 Timestamp: Not Supported 00:20:35.723 Copy: Not Supported 00:20:35.723 Volatile Write Cache: Not Present 00:20:35.723 Atomic Write Unit (Normal): 1 00:20:35.723 Atomic Write Unit (PFail): 1 00:20:35.723 Atomic Compare & Write Unit: 1 00:20:35.723 Fused Compare & Write: Not Supported 00:20:35.723 Scatter-Gather List 00:20:35.723 SGL Command Set: Supported 00:20:35.723 SGL Keyed: Not Supported 00:20:35.723 SGL Bit Bucket Descriptor: Not Supported 00:20:35.723 SGL Metadata Pointer: Not Supported 00:20:35.723 Oversized SGL: Not Supported 00:20:35.723 SGL Metadata Address: Not Supported 00:20:35.723 SGL Offset: Supported 00:20:35.723 Transport SGL Data Block: Not Supported 00:20:35.723 Replay Protected Memory Block: Not Supported 00:20:35.723 00:20:35.723 Firmware Slot Information 00:20:35.723 ========================= 00:20:35.723 Active slot: 0 00:20:35.723 00:20:35.723 00:20:35.723 Error Log 
00:20:35.723 ========= 00:20:35.723 00:20:35.723 Active Namespaces 00:20:35.723 ================= 00:20:35.723 Discovery Log Page 00:20:35.723 ================== 00:20:35.723 Generation Counter: 2 00:20:35.723 Number of Records: 2 00:20:35.723 Record Format: 0 00:20:35.723 00:20:35.723 Discovery Log Entry 0 00:20:35.723 ---------------------- 00:20:35.723 Transport Type: 3 (TCP) 00:20:35.723 Address Family: 1 (IPv4) 00:20:35.723 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:35.723 Entry Flags: 00:20:35.723 Duplicate Returned Information: 0 00:20:35.723 Explicit Persistent Connection Support for Discovery: 0 00:20:35.723 Transport Requirements: 00:20:35.723 Secure Channel: Not Specified 00:20:35.723 Port ID: 1 (0x0001) 00:20:35.723 Controller ID: 65535 (0xffff) 00:20:35.723 Admin Max SQ Size: 32 00:20:35.723 Transport Service Identifier: 4420 00:20:35.723 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:35.723 Transport Address: 10.0.0.1 00:20:35.723 Discovery Log Entry 1 00:20:35.723 ---------------------- 00:20:35.723 Transport Type: 3 (TCP) 00:20:35.723 Address Family: 1 (IPv4) 00:20:35.723 Subsystem Type: 2 (NVM Subsystem) 00:20:35.723 Entry Flags: 00:20:35.723 Duplicate Returned Information: 0 00:20:35.723 Explicit Persistent Connection Support for Discovery: 0 00:20:35.723 Transport Requirements: 00:20:35.723 Secure Channel: Not Specified 00:20:35.723 Port ID: 1 (0x0001) 00:20:35.723 Controller ID: 65535 (0xffff) 00:20:35.723 Admin Max SQ Size: 32 00:20:35.723 Transport Service Identifier: 4420 00:20:35.723 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:35.724 Transport Address: 10.0.0.1 00:20:35.724 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:35.984 get_feature(0x01) failed 00:20:35.984 get_feature(0x02) failed 00:20:35.984 get_feature(0x04) failed 00:20:35.984 ===================================================== 00:20:35.984 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:35.984 ===================================================== 00:20:35.984 Controller Capabilities/Features 00:20:35.984 ================================ 00:20:35.984 Vendor ID: 0000 00:20:35.984 Subsystem Vendor ID: 0000 00:20:35.984 Serial Number: aa56754d3f004ec1e72c 00:20:35.984 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:35.984 Firmware Version: 6.8.9-20 00:20:35.984 Recommended Arb Burst: 6 00:20:35.984 IEEE OUI Identifier: 00 00 00 00:20:35.984 Multi-path I/O 00:20:35.984 May have multiple subsystem ports: Yes 00:20:35.984 May have multiple controllers: Yes 00:20:35.984 Associated with SR-IOV VF: No 00:20:35.984 Max Data Transfer Size: Unlimited 00:20:35.984 Max Number of Namespaces: 1024 00:20:35.984 Max Number of I/O Queues: 128 00:20:35.984 NVMe Specification Version (VS): 1.3 00:20:35.984 NVMe Specification Version (Identify): 1.3 00:20:35.984 Maximum Queue Entries: 1024 00:20:35.984 Contiguous Queues Required: No 00:20:35.984 Arbitration Mechanisms Supported 00:20:35.984 Weighted Round Robin: Not Supported 00:20:35.984 Vendor Specific: Not Supported 00:20:35.984 Reset Timeout: 7500 ms 00:20:35.984 Doorbell Stride: 4 bytes 00:20:35.984 NVM Subsystem Reset: Not Supported 00:20:35.984 Command Sets Supported 00:20:35.984 NVM Command Set: Supported 00:20:35.984 Boot Partition: Not Supported 00:20:35.984 Memory 
Page Size Minimum: 4096 bytes 00:20:35.984 Memory Page Size Maximum: 4096 bytes 00:20:35.984 Persistent Memory Region: Not Supported 00:20:35.984 Optional Asynchronous Events Supported 00:20:35.984 Namespace Attribute Notices: Supported 00:20:35.984 Firmware Activation Notices: Not Supported 00:20:35.984 ANA Change Notices: Supported 00:20:35.984 PLE Aggregate Log Change Notices: Not Supported 00:20:35.984 LBA Status Info Alert Notices: Not Supported 00:20:35.984 EGE Aggregate Log Change Notices: Not Supported 00:20:35.984 Normal NVM Subsystem Shutdown event: Not Supported 00:20:35.984 Zone Descriptor Change Notices: Not Supported 00:20:35.984 Discovery Log Change Notices: Not Supported 00:20:35.984 Controller Attributes 00:20:35.984 128-bit Host Identifier: Supported 00:20:35.984 Non-Operational Permissive Mode: Not Supported 00:20:35.984 NVM Sets: Not Supported 00:20:35.984 Read Recovery Levels: Not Supported 00:20:35.984 Endurance Groups: Not Supported 00:20:35.984 Predictable Latency Mode: Not Supported 00:20:35.984 Traffic Based Keep ALive: Supported 00:20:35.984 Namespace Granularity: Not Supported 00:20:35.984 SQ Associations: Not Supported 00:20:35.984 UUID List: Not Supported 00:20:35.984 Multi-Domain Subsystem: Not Supported 00:20:35.984 Fixed Capacity Management: Not Supported 00:20:35.984 Variable Capacity Management: Not Supported 00:20:35.984 Delete Endurance Group: Not Supported 00:20:35.984 Delete NVM Set: Not Supported 00:20:35.984 Extended LBA Formats Supported: Not Supported 00:20:35.984 Flexible Data Placement Supported: Not Supported 00:20:35.984 00:20:35.984 Controller Memory Buffer Support 00:20:35.984 ================================ 00:20:35.984 Supported: No 00:20:35.984 00:20:35.984 Persistent Memory Region Support 00:20:35.984 ================================ 00:20:35.984 Supported: No 00:20:35.984 00:20:35.984 Admin Command Set Attributes 00:20:35.984 ============================ 00:20:35.984 Security Send/Receive: Not Supported 00:20:35.984 Format NVM: Not Supported 00:20:35.984 Firmware Activate/Download: Not Supported 00:20:35.984 Namespace Management: Not Supported 00:20:35.984 Device Self-Test: Not Supported 00:20:35.984 Directives: Not Supported 00:20:35.984 NVMe-MI: Not Supported 00:20:35.984 Virtualization Management: Not Supported 00:20:35.984 Doorbell Buffer Config: Not Supported 00:20:35.984 Get LBA Status Capability: Not Supported 00:20:35.984 Command & Feature Lockdown Capability: Not Supported 00:20:35.984 Abort Command Limit: 4 00:20:35.984 Async Event Request Limit: 4 00:20:35.984 Number of Firmware Slots: N/A 00:20:35.984 Firmware Slot 1 Read-Only: N/A 00:20:35.984 Firmware Activation Without Reset: N/A 00:20:35.984 Multiple Update Detection Support: N/A 00:20:35.984 Firmware Update Granularity: No Information Provided 00:20:35.984 Per-Namespace SMART Log: Yes 00:20:35.984 Asymmetric Namespace Access Log Page: Supported 00:20:35.984 ANA Transition Time : 10 sec 00:20:35.984 00:20:35.984 Asymmetric Namespace Access Capabilities 00:20:35.984 ANA Optimized State : Supported 00:20:35.984 ANA Non-Optimized State : Supported 00:20:35.984 ANA Inaccessible State : Supported 00:20:35.984 ANA Persistent Loss State : Supported 00:20:35.984 ANA Change State : Supported 00:20:35.984 ANAGRPID is not changed : No 00:20:35.984 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:35.984 00:20:35.984 ANA Group Identifier Maximum : 128 00:20:35.984 Number of ANA Group Identifiers : 128 00:20:35.984 Max Number of Allowed Namespaces : 1024 00:20:35.984 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:20:35.984 Command Effects Log Page: Supported 00:20:35.984 Get Log Page Extended Data: Supported 00:20:35.984 Telemetry Log Pages: Not Supported 00:20:35.984 Persistent Event Log Pages: Not Supported 00:20:35.984 Supported Log Pages Log Page: May Support 00:20:35.984 Commands Supported & Effects Log Page: Not Supported 00:20:35.984 Feature Identifiers & Effects Log Page:May Support 00:20:35.984 NVMe-MI Commands & Effects Log Page: May Support 00:20:35.984 Data Area 4 for Telemetry Log: Not Supported 00:20:35.984 Error Log Page Entries Supported: 128 00:20:35.984 Keep Alive: Supported 00:20:35.984 Keep Alive Granularity: 1000 ms 00:20:35.984 00:20:35.984 NVM Command Set Attributes 00:20:35.984 ========================== 00:20:35.984 Submission Queue Entry Size 00:20:35.984 Max: 64 00:20:35.984 Min: 64 00:20:35.984 Completion Queue Entry Size 00:20:35.984 Max: 16 00:20:35.984 Min: 16 00:20:35.984 Number of Namespaces: 1024 00:20:35.984 Compare Command: Not Supported 00:20:35.984 Write Uncorrectable Command: Not Supported 00:20:35.984 Dataset Management Command: Supported 00:20:35.984 Write Zeroes Command: Supported 00:20:35.984 Set Features Save Field: Not Supported 00:20:35.984 Reservations: Not Supported 00:20:35.984 Timestamp: Not Supported 00:20:35.984 Copy: Not Supported 00:20:35.984 Volatile Write Cache: Present 00:20:35.984 Atomic Write Unit (Normal): 1 00:20:35.984 Atomic Write Unit (PFail): 1 00:20:35.984 Atomic Compare & Write Unit: 1 00:20:35.984 Fused Compare & Write: Not Supported 00:20:35.984 Scatter-Gather List 00:20:35.984 SGL Command Set: Supported 00:20:35.984 SGL Keyed: Not Supported 00:20:35.984 SGL Bit Bucket Descriptor: Not Supported 00:20:35.984 SGL Metadata Pointer: Not Supported 00:20:35.984 Oversized SGL: Not Supported 00:20:35.984 SGL Metadata Address: Not Supported 00:20:35.984 SGL Offset: Supported 00:20:35.984 Transport SGL Data Block: Not Supported 00:20:35.984 Replay Protected Memory Block: Not Supported 00:20:35.984 00:20:35.984 Firmware Slot Information 00:20:35.984 ========================= 00:20:35.984 Active slot: 0 00:20:35.984 00:20:35.984 Asymmetric Namespace Access 00:20:35.984 =========================== 00:20:35.984 Change Count : 0 00:20:35.984 Number of ANA Group Descriptors : 1 00:20:35.984 ANA Group Descriptor : 0 00:20:35.984 ANA Group ID : 1 00:20:35.984 Number of NSID Values : 1 00:20:35.984 Change Count : 0 00:20:35.984 ANA State : 1 00:20:35.984 Namespace Identifier : 1 00:20:35.984 00:20:35.984 Commands Supported and Effects 00:20:35.984 ============================== 00:20:35.984 Admin Commands 00:20:35.984 -------------- 00:20:35.984 Get Log Page (02h): Supported 00:20:35.984 Identify (06h): Supported 00:20:35.984 Abort (08h): Supported 00:20:35.984 Set Features (09h): Supported 00:20:35.984 Get Features (0Ah): Supported 00:20:35.984 Asynchronous Event Request (0Ch): Supported 00:20:35.984 Keep Alive (18h): Supported 00:20:35.984 I/O Commands 00:20:35.984 ------------ 00:20:35.985 Flush (00h): Supported 00:20:35.985 Write (01h): Supported LBA-Change 00:20:35.985 Read (02h): Supported 00:20:35.985 Write Zeroes (08h): Supported LBA-Change 00:20:35.985 Dataset Management (09h): Supported 00:20:35.985 00:20:35.985 Error Log 00:20:35.985 ========= 00:20:35.985 Entry: 0 00:20:35.985 Error Count: 0x3 00:20:35.985 Submission Queue Id: 0x0 00:20:35.985 Command Id: 0x5 00:20:35.985 Phase Bit: 0 00:20:35.985 Status Code: 0x2 00:20:35.985 Status Code Type: 0x0 00:20:35.985 Do Not Retry: 1 00:20:35.985 Error 
Location: 0x28 00:20:35.985 LBA: 0x0 00:20:35.985 Namespace: 0x0 00:20:35.985 Vendor Log Page: 0x0 00:20:35.985 ----------- 00:20:35.985 Entry: 1 00:20:35.985 Error Count: 0x2 00:20:35.985 Submission Queue Id: 0x0 00:20:35.985 Command Id: 0x5 00:20:35.985 Phase Bit: 0 00:20:35.985 Status Code: 0x2 00:20:35.985 Status Code Type: 0x0 00:20:35.985 Do Not Retry: 1 00:20:35.985 Error Location: 0x28 00:20:35.985 LBA: 0x0 00:20:35.985 Namespace: 0x0 00:20:35.985 Vendor Log Page: 0x0 00:20:35.985 ----------- 00:20:35.985 Entry: 2 00:20:35.985 Error Count: 0x1 00:20:35.985 Submission Queue Id: 0x0 00:20:35.985 Command Id: 0x4 00:20:35.985 Phase Bit: 0 00:20:35.985 Status Code: 0x2 00:20:35.985 Status Code Type: 0x0 00:20:35.985 Do Not Retry: 1 00:20:35.985 Error Location: 0x28 00:20:35.985 LBA: 0x0 00:20:35.985 Namespace: 0x0 00:20:35.985 Vendor Log Page: 0x0 00:20:35.985 00:20:35.985 Number of Queues 00:20:35.985 ================ 00:20:35.985 Number of I/O Submission Queues: 128 00:20:35.985 Number of I/O Completion Queues: 128 00:20:35.985 00:20:35.985 ZNS Specific Controller Data 00:20:35.985 ============================ 00:20:35.985 Zone Append Size Limit: 0 00:20:35.985 00:20:35.985 00:20:35.985 Active Namespaces 00:20:35.985 ================= 00:20:35.985 get_feature(0x05) failed 00:20:35.985 Namespace ID:1 00:20:35.985 Command Set Identifier: NVM (00h) 00:20:35.985 Deallocate: Supported 00:20:35.985 Deallocated/Unwritten Error: Not Supported 00:20:35.985 Deallocated Read Value: Unknown 00:20:35.985 Deallocate in Write Zeroes: Not Supported 00:20:35.985 Deallocated Guard Field: 0xFFFF 00:20:35.985 Flush: Supported 00:20:35.985 Reservation: Not Supported 00:20:35.985 Namespace Sharing Capabilities: Multiple Controllers 00:20:35.985 Size (in LBAs): 1310720 (5GiB) 00:20:35.985 Capacity (in LBAs): 1310720 (5GiB) 00:20:35.985 Utilization (in LBAs): 1310720 (5GiB) 00:20:35.985 UUID: 481817e6-bc61-4f34-a434-e019f460cbe7 00:20:35.985 Thin Provisioning: Not Supported 00:20:35.985 Per-NS Atomic Units: Yes 00:20:35.985 Atomic Boundary Size (Normal): 0 00:20:35.985 Atomic Boundary Size (PFail): 0 00:20:35.985 Atomic Boundary Offset: 0 00:20:35.985 NGUID/EUI64 Never Reused: No 00:20:35.985 ANA group ID: 1 00:20:35.985 Namespace Write Protected: No 00:20:35.985 Number of LBA Formats: 1 00:20:35.985 Current LBA Format: LBA Format #00 00:20:35.985 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:35.985 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:35.985 rmmod nvme_tcp 00:20:35.985 rmmod nvme_fabrics 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:20:35.985 22:51:50 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:35.985 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:36.245 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:36.245 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:36.245 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:36.245 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:36.245 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:36.245 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:36.245 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:36.245 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:36.245 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:36.245 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:36.245 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:36.245 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:36.245 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.245 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.245 22:51:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.245 22:51:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:20:36.245 22:51:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:36.245 22:51:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:36.245 22:51:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:20:36.504 22:51:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:36.504 22:51:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:36.504 22:51:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:36.504 22:51:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:36.504 22:51:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:20:36.504 22:51:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:20:36.504 22:51:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:37.073 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:37.333 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:37.333 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:37.333 ************************************ 00:20:37.333 END TEST nvmf_identify_kernel_target 00:20:37.333 ************************************ 00:20:37.333 00:20:37.333 real 0m3.329s 00:20:37.333 user 0m1.131s 00:20:37.333 sys 0m1.458s 00:20:37.333 22:51:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:37.333 22:51:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.333 22:51:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:37.333 22:51:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:37.333 22:51:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:37.333 22:51:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.333 ************************************ 00:20:37.333 START TEST nvmf_auth_host 00:20:37.333 ************************************ 00:20:37.333 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:37.593 * Looking for test storage... 
00:20:37.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:37.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.593 --rc genhtml_branch_coverage=1 00:20:37.593 --rc genhtml_function_coverage=1 00:20:37.593 --rc genhtml_legend=1 00:20:37.593 --rc geninfo_all_blocks=1 00:20:37.593 --rc geninfo_unexecuted_blocks=1 00:20:37.593 00:20:37.593 ' 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:37.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.593 --rc genhtml_branch_coverage=1 00:20:37.593 --rc genhtml_function_coverage=1 00:20:37.593 --rc genhtml_legend=1 00:20:37.593 --rc geninfo_all_blocks=1 00:20:37.593 --rc geninfo_unexecuted_blocks=1 00:20:37.593 00:20:37.593 ' 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:37.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.593 --rc genhtml_branch_coverage=1 00:20:37.593 --rc genhtml_function_coverage=1 00:20:37.593 --rc genhtml_legend=1 00:20:37.593 --rc geninfo_all_blocks=1 00:20:37.593 --rc geninfo_unexecuted_blocks=1 00:20:37.593 00:20:37.593 ' 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:37.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:37.593 --rc genhtml_branch_coverage=1 00:20:37.593 --rc genhtml_function_coverage=1 00:20:37.593 --rc genhtml_legend=1 00:20:37.593 --rc geninfo_all_blocks=1 00:20:37.593 --rc geninfo_unexecuted_blocks=1 00:20:37.593 00:20:37.593 ' 00:20:37.593 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:37.594 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:37.594 Cannot find device "nvmf_init_br" 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:37.594 Cannot find device "nvmf_init_br2" 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:37.594 Cannot find device "nvmf_tgt_br" 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:37.594 Cannot find device "nvmf_tgt_br2" 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:37.594 Cannot find device "nvmf_init_br" 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:37.594 Cannot find device "nvmf_init_br2" 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:37.594 Cannot find device "nvmf_tgt_br" 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:20:37.594 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:37.595 Cannot find device "nvmf_tgt_br2" 00:20:37.595 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:20:37.595 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:37.595 Cannot find device "nvmf_br" 00:20:37.595 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:20:37.595 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:37.854 Cannot find device "nvmf_init_if" 00:20:37.854 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:20:37.854 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:37.854 Cannot find device "nvmf_init_if2" 00:20:37.854 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:37.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:37.855 22:51:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:37.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:37.855 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
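
(For orientation, a condensed sketch of the veth topology that nvmf_veth_init builds in the trace above: every command here is lifted from the traced lines; the second interface pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4, is set up the same way and omitted for brevity.)

ip netns add nvmf_tgt_ns_spdk                               # target-side network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up    # bring both ends up
ip link add nvmf_br type bridge; ip link set nvmf_br up     # bridge joins the *_br ends
ip link set nvmf_init_br master nvmf_br                     # enslave bridge-side ends
ip link set nvmf_tgt_br master nvmf_br
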
00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:38.115 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:38.115 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:20:38.115 00:20:38.115 --- 10.0.0.3 ping statistics --- 00:20:38.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.115 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:38.115 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:38.115 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:20:38.115 00:20:38.115 --- 10.0.0.4 ping statistics --- 00:20:38.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.115 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:38.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:38.115 00:20:38.115 --- 10.0.0.1 ping statistics --- 00:20:38.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.115 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:38.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:38.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:20:38.115 00:20:38.115 --- 10.0.0.2 ping statistics --- 00:20:38.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.115 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # return 0 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=92596 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 92596 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 92596 ']' 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
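
(Roughly what nvmfappstart is doing at this point: start nvmf_tgt inside the target namespace, then block until its RPC socket is accepting. The command line is taken verbatim from the trace; the polling loop is only an assumed stand-in for waitforlisten, whose body is not traced here.)

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!                                   # 92596 in this run
while [ ! -S /var/tmp/spdk.sock ]; do        # assumed poll; waitforlisten also
    sleep 0.1                                # verifies the pid is still alive
done
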
00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:38.115 22:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=c72293a53500e1e67ab1213a5252e8e7 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.kju 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key c72293a53500e1e67ab1213a5252e8e7 0 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 c72293a53500e1e67ab1213a5252e8e7 0 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=c72293a53500e1e67ab1213a5252e8e7 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.kju 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.kju 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.kju 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.375 22:51:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:38.375 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=2af542eee47ef7b764d8a507e7567417e30f92ebb6b23214a1af48948773771c 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.Rt1 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 2af542eee47ef7b764d8a507e7567417e30f92ebb6b23214a1af48948773771c 3 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 2af542eee47ef7b764d8a507e7567417e30f92ebb6b23214a1af48948773771c 3 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=2af542eee47ef7b764d8a507e7567417e30f92ebb6b23214a1af48948773771c 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.Rt1 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.Rt1 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Rt1 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=efc6a4edebf060f1c0637a52b9a6a70db4e972d77c1cf086 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.O74 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key efc6a4edebf060f1c0637a52b9a6a70db4e972d77c1cf086 0 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 efc6a4edebf060f1c0637a52b9a6a70db4e972d77c1cf086 0 
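Each gen_dhchap_key call above draws len/2 bytes from /dev/urandom as a hex string of len characters and wraps it as a DH-HMAC-CHAP secret of the form DHHC-1:<two-digit digest id>:<base64>:, with digests mapped null=0, sha256=1, sha384=2, sha512=3. xtrace does not echo the body of the inline python, so the sketch below assumes it base64-encodes the secret followed by its little-endian CRC-32, which is what the DHHC-1 strings later in this log decode to:

    # Hedged reconstruction of gen_dhchap_key; the CRC-32 transform is an
    # assumption inferred from the secret format, not quoted from nvmf/common.sh.
    gen_dhchap_key() {
        local digest=$1 len=$2            # e.g. gen_dhchap_key sha512 64
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len hex characters
        file=$(mktemp -t "spdk.key-$digest.XXX")
        python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); c=zlib.crc32(s).to_bytes(4,"little"); print("DHHC-1:0%s:%s:" % (sys.argv[2], base64.b64encode(s+c).decode()))' \
            "$key" "${digests[$digest]}" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

The slot assignments at host/auth.sh@73-77 then pair each keys[i] with a ckeys[i] of a different strength, e.g. keys[0]=$(gen_dhchap_key null 32) alongside ckeys[0]=$(gen_dhchap_key sha512 64), with ckeys[4] deliberately left empty.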
00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=efc6a4edebf060f1c0637a52b9a6a70db4e972d77c1cf086 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.O74 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.O74 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.O74 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=8fcee6bd5a5aac8024d9231e2a57e249dd38a8b0b367ce7d 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.VQi 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 8fcee6bd5a5aac8024d9231e2a57e249dd38a8b0b367ce7d 2 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 8fcee6bd5a5aac8024d9231e2a57e249dd38a8b0b367ce7d 2 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=8fcee6bd5a5aac8024d9231e2a57e249dd38a8b0b367ce7d 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.VQi 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.VQi 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.VQi 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.634 22:51:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=4f5c2bcb614501d788cfeec75a505bf5 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.uda 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 4f5c2bcb614501d788cfeec75a505bf5 1 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 4f5c2bcb614501d788cfeec75a505bf5 1 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=4f5c2bcb614501d788cfeec75a505bf5 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.uda 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.uda 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.uda 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=c8ce9d9d19a5802fa7d1e349d30c02b5 00:20:38.634 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.gBJ 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key c8ce9d9d19a5802fa7d1e349d30c02b5 1 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 c8ce9d9d19a5802fa7d1e349d30c02b5 1 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=c8ce9d9d19a5802fa7d1e349d30c02b5 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.gBJ 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.gBJ 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.gBJ 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=44e7e0fc1f43a850db2eaaf78e5ad727cea59097ba8a9702 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.dYk 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 44e7e0fc1f43a850db2eaaf78e5ad727cea59097ba8a9702 2 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 44e7e0fc1f43a850db2eaaf78e5ad727cea59097ba8a9702 2 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=44e7e0fc1f43a850db2eaaf78e5ad727cea59097ba8a9702 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.dYk 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.dYk 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.dYk 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:38.893 22:51:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=73723556aad3c0b4fc0e15c66d92f713 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.jal 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 73723556aad3c0b4fc0e15c66d92f713 0 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 73723556aad3c0b4fc0e15c66d92f713 0 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=73723556aad3c0b4fc0e15c66d92f713 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.jal 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.jal 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.jal 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=13ff28578cae4eeab71c63fe9957443523075f36338081bffbf44e39ab5f46b7 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.nvt 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 13ff28578cae4eeab71c63fe9957443523075f36338081bffbf44e39ab5f46b7 3 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 13ff28578cae4eeab71c63fe9957443523075f36338081bffbf44e39ab5f46b7 3 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=13ff28578cae4eeab71c63fe9957443523075f36338081bffbf44e39ab5f46b7 00:20:38.893 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:20:38.894 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@729 -- # python - 00:20:38.894 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.nvt 00:20:38.894 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.nvt 00:20:38.894 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.nvt 00:20:38.894 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:38.894 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 92596 00:20:38.894 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 92596 ']' 00:20:38.894 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.894 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:38.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.894 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.894 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:38.894 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kju 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Rt1 ]] 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Rt1 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.O74 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.VQi ]] 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.VQi 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.463 22:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.uda 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.gBJ ]] 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.gBJ 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.dYk 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.jal ]] 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.jal 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.nvt 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:39.463 22:51:54 
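Each generated file is then registered with the running target under a stable name, which is what the rpc_cmd keyring_file_add_key calls above and below do for key0/ckey0 through key4. Condensed into a loop (rpc here stands in for the harness's rpc_cmd wrapper over scripts/rpc.py):

    # Register every generated secret with the target's file-backed keyring.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    for i in "${!keys[@]}"; do
        rpc keyring_file_add_key "key$i" "${keys[$i]}"
        # ckeys[4] is deliberately empty in this run, so guard the ctrlr key.
        [[ -n ${ckeys[$i]:-} ]] && rpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    done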
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:39.463 22:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:39.722 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:39.722 Waiting for block devices as requested 00:20:39.722 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:39.982 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:40.549 No valid GPT data, bailing 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:40.549 No valid GPT data, bailing 00:20:40.549 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
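Before building the kernel-side target, the harness scans /sys/block/nvme* for a namespace it can safely export: zoned devices are skipped, and each candidate is probed with scripts/spdk-gpt.py and blkid, the "No valid GPT data, bailing" lines above being the expected negative result for an empty disk. A rough sketch of that selection, folding the two probes into the blkid check:

    # Pick an unpartitioned, non-zoned kernel NVMe namespace to export.
    # The last eligible candidate wins, which is why /dev/nvme1n1 is used below.
    for block in /sys/block/nvme*; do
        [[ -e $block ]] || continue
        dev=${block##*/}
        # Zoned namespaces report something other than "none" in queue/zoned.
        [[ -e $block/queue/zoned && $(< "$block/queue/zoned") != none ]] && continue
        # No partition table at all -> the device is considered free.
        [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] && nvme=/dev/$dev
    done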
scripts/common.sh@395 -- # return 1 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:40.807 No valid GPT data, bailing 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:40.807 No valid GPT data, bailing 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:20:40.807 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:20:40.808 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:40.808 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -a 10.0.0.1 -t tcp -s 4420 00:20:40.808 00:20:40.808 Discovery Log Number of Records 2, Generation counter 2 00:20:40.808 =====Discovery Log Entry 0====== 00:20:40.808 trtype: tcp 00:20:40.808 adrfam: ipv4 00:20:40.808 subtype: current discovery subsystem 00:20:40.808 treq: not specified, sq flow control disable supported 00:20:40.808 portid: 1 00:20:40.808 trsvcid: 4420 00:20:40.808 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:40.808 traddr: 10.0.0.1 00:20:40.808 eflags: none 00:20:40.808 sectype: none 00:20:40.808 =====Discovery Log Entry 1====== 00:20:40.808 trtype: tcp 00:20:40.808 adrfam: ipv4 00:20:40.808 subtype: nvme subsystem 00:20:40.808 treq: not specified, sq flow control disable supported 00:20:40.808 portid: 1 00:20:40.808 trsvcid: 4420 00:20:40.808 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:40.808 traddr: 10.0.0.1 00:20:40.808 eflags: none 00:20:40.808 sectype: none 00:20:40.808 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:40.808 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:40.808 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:40.808 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:40.808 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.808 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:40.808 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:40.808 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:40.808 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:40.808 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
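xtrace prints commands but not their redirections, so the string of bare echo commands above (SPDK-nqn..., 1, /dev/nvme1n1, 1, 10.0.0.1, tcp, 4420, ipv4) hides where each value lands. A hedged reconstruction against the stock nvmet configfs layout, including the host allow-listing from host/auth.sh@36-38 that follows the discovery output; the attribute names are the kernel's standard ones, assumed rather than quoted from nvmf/common.sh:

    # Export /dev/nvme1n1 as nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420 (TCP).
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$sub/attr_model"
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"        # publish the subsystem on the port
    # nvmet_auth_init: register the host NQN and switch the subsystem from
    # allow-any to an explicit allow list (assumed target of the `echo 0`).
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    mkdir "$host"
    echo 0 > "$sub/attr_allow_any_host"
    ln -s "$host" "$sub/allowed_hosts/"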
ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:40.808 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:40.808 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]] 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 
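nvmet_auth_set_key (host/auth.sh@42-51 above) parameterizes the target side of the handshake: one hash, one DH group, and the key/ctrlr-key pair for the chosen keyid. The echo targets are again invisible in xtrace; the kernel's per-host DH-HMAC-CHAP attributes are the assumed destinations:

    # Hedged sketch of `nvmet_auth_set_key sha256 ffdhe2048 1`; $key and $ckey
    # are the DHHC-1:... strings printed above (keys[1] and ckeys[1]).
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"
    echo ffdhe2048      > "$host/dhchap_dhgroup"
    echo "$key"         > "$host/dhchap_key"
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # bidirectional auth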
10.0.0.1 ]] 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.066 nvme0n1 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:41.066 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: ]] 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.326 nvme0n1 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.326 
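connect_authenticate (host/auth.sh@55-65) is the per-combination probe: pin the initiator to a single digest/DH-group pair via bdev_nvme_set_options, attach with the named keyring entries, and treat the controller's appearance as success. Its body, condensed, with rpc standing in for rpc_cmd as in the earlier sketch:

    # One authentication probe, using the flags visible in the surrounding trace.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Success criterion: the controller registered under the expected name.
    [[ $(rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc bdev_nvme_detach_controller nvme0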
22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.326 22:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]] 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:41.326 22:51:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.326 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.586 nvme0n1 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:41.586 22:51:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: ]] 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.586 nvme0n1 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.586 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: ]] 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.846 22:51:56 
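[Note: the echo lines emitted by nvmet_auth_set_key above have their redirections stripped by xtrace, so their destinations are invisible in the log. On the target side these parameters presumably land in the kernel nvmet configfs entry for the host NQN; the path and attribute names below follow the stock Linux nvmet configfs ABI and are an assumption here, not something the trace shows:

    # Writing one round's DH-HMAC-CHAP parameters for a host NQN into kernel
    # nvmet (values copied from the keyid 3 round traced above).
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"     # digest for this round
    echo ffdhe2048      > "$host/dhchap_dhgroup"  # DH group under test
    echo 'DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==:' \
        > "$host/dhchap_key"                      # secret the host must present
    echo 'DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0:' \
        > "$host/dhchap_ctrlr_key"                # secret the controller presents back
]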
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.846 nvme0n1 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:41.846 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:41.847 
22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.847 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
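[Note: keyid 4 is the unidirectional case: its controller key is empty ([[ -z '' ]] above), so the attach at host/auth.sh@61 carries only --dhchap-key key4 and no --dhchap-ctrlr-key. What makes that automatic is the ${var:+...} expansion at host/auth.sh@58, which expands to the extra flags only when a controller key exists; bash's xtrace prints array assignments unexpanded, which is why that line appears in source form while scalar assignments appear expanded. A standalone illustration of the idiom, with placeholder key material:

    # ${ckeys[keyid]:+...} expands to nothing when the entry is unset or empty,
    # so the ckey array ends up empty for unidirectional keyids.
    ckeys=([2]="DHHC-1:01:placeholder:" [4]="")

    keyid=4
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"    # 0 -> host proves itself only

    keyid=2
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"     # --dhchap-ctrlr-key ckey2 -> bidirectional auth
]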
00:20:42.106 nvme0n1 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:42.106 22:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: ]] 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:42.364 22:51:57 
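[Note: each verification round above ends with [[ nvme0 == \n\v\m\e\0 ]]. The backslashes are not in the script source; they are how bash's xtrace renders a quoted right-hand side of ==, signalling that the pattern is literal rather than a glob. A minimal demonstration:

    name=nvme0                          # as returned by jq -r '.[].name' above
    [[ $name == "nvme0" ]] && echo up   # source form; xtrace prints \n\v\m\e\0
    [[ $name == nvme* ]]   && echo up   # unquoted RHS is a glob: nvme0, nvme1, ...
]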
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.364 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.622 nvme0n1 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.622 22:51:57 
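[Note: stripped of the tracing, one connect_authenticate round as just replayed for ffdhe3072/keyid 0 is four RPCs against the SPDK host. Here rpc_cmd stands for the suite's wrapper around scripts/rpc.py, and key0/ckey0 are keyring names registered earlier in the test; every flag below is copied from the trace:

    # Pin the allowed digest/dhgroup, attach with DH-HMAC-CHAP, verify, detach.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

A successful round is therefore one where the controller shows up under its expected name and detaches cleanly; an authentication failure would surface as the attach RPC erroring out.]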
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]] 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.622 22:51:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.622 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.880 nvme0n1 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: ]] 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.880 nvme0n1 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:42.880 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: ]] 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.139 nvme0n1 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:43.139 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.140 22:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.398 nvme0n1 00:20:43.398 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.398 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.398 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:43.398 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.398 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.398 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.399 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.399 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.399 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.399 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.399 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.399 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.399 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:43.399 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:43.399 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.399 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:43.399 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:43.399 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:43.399 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:43.399 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:43.399 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:43.399 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: ]] 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.965 22:51:58 
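[Note: the for dhgroup / for keyid markers above (host/auth.sh@101-102) are the sweep that structures this whole stretch of log: every key id is exercised against every DH group for the current digest, with the target reconfigured via nvmet_auth_set_key before each host-side connect_authenticate. Schematically; the dhgroups list is limited here to the groups visible in this log, and the surrounding digest loop is an assumption from the fixed sha256 arguments in the trace:

    digest=sha256
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)   # groups seen so far in this run
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do         # keys[0..4] registered earlier
            nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side
        done
    done
]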
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.965 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.966 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:43.966 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.966 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:43.966 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:43.966 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:43.966 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.966 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.966 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.224 nvme0n1 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]] 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.224 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.225 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.225 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:44.225 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:44.225 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:44.225 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.225 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.225 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:44.225 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.225 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:44.225 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:44.225 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:44.225 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.225 22:51:58 
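[Note: the nvmf/common.sh@765-779 run above is get_main_ns_ip choosing which address to dial: an associative array maps the transport to the environment variable holding the address, and an indirect expansion (whose result, 10.0.0.1, is all the trace shows) dereferences it. A condensed reconstruction, assuming the helper matches its trace:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        # Refuse unknown transports, then dereference the chosen variable name.
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        ip=${!ip}        # e.g. NVMF_INITIATOR_IP -> 10.0.0.1 in this run
        [[ -z $ip ]] && return 1
        echo "$ip"
    }
]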
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.225 22:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.483 nvme0n1 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:44.483 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: ]] 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.484 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.742 nvme0n1 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: ]] 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:44.742 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.743 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:44.743 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:44.743 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:44.743 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:44.743 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:44.743 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.001 nvme0n1 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:45.001 22:51:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.001 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.260 nvme0n1 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:45.260 22:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:46.636 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:46.636 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: ]] 00:20:46.636 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:46.636 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:46.636 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:46.636 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:46.636 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:46.636 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:46.636 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.636 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:46.636 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.636 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.895 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.895 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:46.895 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:46.895 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:46.895 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:46.895 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.895 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.895 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:46.895 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.895 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:46.895 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:46.895 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:46.895 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.895 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.895 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.153 nvme0n1 00:20:47.153 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.153 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.153 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.153 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.153 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.153 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.153 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.153 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.153 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.153 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.153 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.153 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.153 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:47.153 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.153 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.153 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]] 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.154 22:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.413 nvme0n1 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.413 22:52:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: ]] 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.413 22:52:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.413 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.982 nvme0n1 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: ]] 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:47.982 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.982 
22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.241 nvme0n1 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.241 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:48.242 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:48.242 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:48.242 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.242 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.242 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:48.242 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.242 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:48.242 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:48.242 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:48.242 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:48.242 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.242 22:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.501 nvme0n1 00:20:48.501 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.501 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.501 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.501 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.501 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.501 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.760 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.760 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.760 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.760 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.760 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.760 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.760 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.760 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:48.760 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.760 22:52:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.760 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:48.760 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:48.760 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: ]] 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.761 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.329 nvme0n1 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]] 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.329 22:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.897 nvme0n1 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:49.897 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: ]] 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.898 
22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.898 22:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.468 nvme0n1 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: ]] 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.468 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.035 nvme0n1 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.035 22:52:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:51.035 22:52:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.035 22:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.601 nvme0n1 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: ]] 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.601 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:51.858 nvme0n1 00:20:51.858 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.858 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.858 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.858 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.858 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.858 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.858 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.858 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.858 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.858 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.858 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.858 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.858 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:51.858 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.858 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:51.858 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:51.858 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]] 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.859 nvme0n1 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.859 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:52.117 
22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: ]] 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.117 nvme0n1 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: ]] 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.117 
22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.117 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.377 nvme0n1 00:20:52.377 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.377 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.377 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.377 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.377 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.377 22:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.377 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.635 nvme0n1 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: ]] 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.636 nvme0n1 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.636 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.923 
22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]] 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.923 22:52:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.923 nvme0n1 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.923 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:52.924 22:52:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: ]] 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.924 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.182 nvme0n1 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: ]] 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.182 22:52:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.182 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.441 nvme0n1 00:20:53.441 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.441 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.441 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.441 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.441 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.441 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.441 22:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:53.441 
22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:20:53.441 nvme0n1 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.441 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: ]] 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:53.699 22:52:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:53.699 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:53.700 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.700 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.700 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:53.700 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.700 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:53.700 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:53.700 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:53.700 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.700 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.700 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.700 nvme0n1 00:20:53.700 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.700 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.700 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.700 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.700 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.700 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.959 22:52:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]] 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.959 22:52:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.959 nvme0n1 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.959 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: ]] 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.218 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.219 nvme0n1 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.219 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: ]] 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.481 22:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.481 nvme0n1 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:54.481 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.755 nvme0n1 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: ]] 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.755 22:52:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:54.755 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.023 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:55.023 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:55.023 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:55.023 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.023 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.023 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.286 nvme0n1 00:20:55.286 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]] 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.287 22:52:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.287 22:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.547 nvme0n1 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: ]] 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.547 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.116 nvme0n1 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: ]] 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.116 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.117 22:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.377 nvme0n1 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:56.377 22:52:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.377 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.635 nvme0n1 00:20:56.635 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.635 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.635 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.635 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.635 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: ]] 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:56.892 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.893 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.893 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:56.893 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.893 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:56.893 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:56.893 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:56.893 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.893 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.893 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.457 nvme0n1 00:20:57.457 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.457 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.457 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.457 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.457 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.457 22:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]]
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==:
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:57.457 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:58.022 nvme0n1
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d:
00:20:58.022 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR:
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d:
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: ]]
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR:
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:58.023 22:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:58.590 nvme0n1
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==:
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0:
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==:
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: ]]
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0:
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:58.590 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:59.157 nvme0n1
00:20:59.157 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.157 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:20:59.157 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:20:59.157 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:59.157 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:59.157 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.157 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:59.157 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:59.157 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:59.157 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:59.157 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.157 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:20:59.157 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:20:59.157 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=:
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=:
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:59.158 22:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:59.725 nvme0n1
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu:
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=:
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:20:59.725 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu:
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: ]]
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=:
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:59.726 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:59.985 nvme0n1
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==:
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==:
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==:
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]]
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==:
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:20:59.985 nvme0n1
00:20:59.985 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:59.986 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:20:59.986 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:20:59.986 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d:
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR:
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d:
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: ]]
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR:
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.245 nvme0n1
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:00.245 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==:
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0:
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==:
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: ]]
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0:
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:21:00.246 22:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.506 nvme0n1
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=:
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=:
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.506 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.765 nvme0n1
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu:
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=:
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu:
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: ]]
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=:
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=()
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]]
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]]
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]]
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.765 nvme0n1
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==:
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==:
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==:
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]]
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==:
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
"ckey${keyid}"}) 00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.765 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.024 nvme0n1 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:21:01.024 
22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: ]] 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.024 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.283 nvme0n1 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: ]] 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.283 
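[Editor's note] The auth.sh@42-@51 lines above show what nvmet_auth_set_key does: for the given key id it echoes the digest, DH group, host key, and (when present) controller key. The trace cannot show where the echoes are redirected, so the configfs destinations in this sketch are an assumption based on the Linux kernel nvmet layout; the host NQN is taken from the attach commands in the log:

# A sketch, not the test's verbatim helper; configfs paths are assumed.
nvmet_auth_set_key_sketch() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}   # arrays filled earlier in the test
    local cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "$cfs/dhchap_hash"      # auth.sh@48
    echo "$dhgroup"        > "$cfs/dhchap_dhgroup"   # auth.sh@49
    echo "$key"            > "$cfs/dhchap_key"       # auth.sh@50
    # Only set the bidirectional (controller) key when one exists for this keyid
    [[ -z $ckey ]] || echo "$ckey" > "$cfs/dhchap_ctrl_key"   # auth.sh@51
}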
22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.283 22:52:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.542 nvme0n1 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.542 nvme0n1 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.542 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: ]] 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.801 nvme0n1 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.801 
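[Editor's note] The host-side counterpart (auth.sh@104, expanding @55-@65) is a four-step cycle per combination: pin the initiator to a single digest/DH group, attach with the matching key pair, confirm the controller came up, and detach so the next combination starts clean. All RPC names and flags below appear verbatim in the trace; the rpc_cmd definition is a simplification of the test's wrapper, and the scripts/rpc.py path is an assumption. key0/ckey0 name keyring entries registered earlier in the test:

# Sketch of one connect_authenticate cycle for sha512/ffdhe4096/keyid 0.
rpc_cmd() { ./scripts/rpc.py "$@"; }   # simplified stand-in for the test wrapper

rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# auth.sh@64-@65: verify exactly our controller exists, then tear it down.
# The backslash-escaped pattern \n\v\m\e\0 forces a literal comparison inside
# [[ ]], so glob characters in the RPC output could not fake a match.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == \n\v\m\e\0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0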
22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.801 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]] 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:02.061 22:52:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.061 nvme0n1 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:21:02.061 22:52:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:02.061 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: ]] 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.321 22:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.321 nvme0n1 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: ]] 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:02.321 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.321 22:52:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.581 nvme0n1 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.581 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:02.582 
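[Editor's note] The nvmf/common.sh@765-@779 run that precedes every attach is get_main_ns_ip: it maps the transport in use to the name of the environment variable holding the right address, then dereferences that name with bash's ${!var} indirection. A sketch under assumptions: the trace shows the already-expanded values ([[ -z tcp ]], 10.0.0.1), so the TEST_TRANSPORT variable name and the early-return control flow are reconstructions, not taken from the log:

get_main_ns_ip_sketch() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # common.sh@768
        [tcp]=NVMF_INITIATOR_IP       # common.sh@769
    )
    # Bail out if the transport is unset/unknown or the address var is empty
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # common.sh@772
    [[ -z ${!ip} ]] && return 1            # common.sh@774, via indirection
    echo "${!ip}"                          # common.sh@779
}

TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip_sketch    # prints 10.0.0.1, matching the echo in this run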
22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.582 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
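[Editor's note] Key id 4 is the one combination where the attach runs without --dhchap-ctrlr-key: its ckey entry is empty (the bare "ckey=" and "[[ -z '' ]]" lines above), so the ${ckeys[keyid]:+...} expansion at auth.sh@58 contributes nothing to the command. A standalone demonstration of that expansion; the array contents here are illustrative, not the test's real keys:

# ${var:+word} expands to word only when var is set and non-empty.
declare -a ckeys=([0]=ctrlkey0 [4]='')
for keyid in 0 4; do
    # Same construction as auth.sh@58: an array that is empty when ckey is empty
    args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${args[*]:-(unidirectional: no --dhchap-ctrlr-key)}"
done
# keyid=0 -> --dhchap-ctrlr-key ckey0
# keyid=4 -> (unidirectional: no --dhchap-ctrlr-key)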
00:21:02.842 nvme0n1 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: ]] 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:02.842 22:52:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.842 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.412 nvme0n1 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.412 22:52:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]] 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.412 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:03.413 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:03.413 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:03.413 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.413 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.413 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:03.413 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.413 22:52:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:03.413 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:03.413 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:03.413 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.413 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.413 22:52:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.673 nvme0n1 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:21:03.673 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: ]] 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.674 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.933 nvme0n1 00:21:03.933 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.933 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.933 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.933 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.933 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.933 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: ]] 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.193 22:52:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.453 nvme0n1 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.453 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.712 nvme0n1 00:21:04.712 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.712 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.713 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.713 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.713 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.713 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
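
Note: the ckey assignment at host/auth.sh@58 uses bash's ${parameter:+word} alternate-value expansion: the array picks up the --dhchap-ctrlr-key argument pair only when a controller key is defined for that keyid, which is why keyid 4 (whose ckey is empty, per the [[ -z '' ]] check above) attaches with --dhchap-key key4 alone. The idiom in isolation, with illustrative values:

    ckeys=([1]=DHHC-1:placeholder [4]="")   # illustrative, not the real keys
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no controller-key args>}"
    done
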
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzcyMjkzYTUzNTAwZTFlNjdhYjEyMTNhNTI1MmU4ZTe5a9vu: 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: ]] 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmFmNTQyZWVlNDdlZjdiNzY0ZDhhNTA3ZTc1Njc0MTdlMzBmOTJlYmI2YjIzMjE0YTFhZjQ4OTQ4NzczNzcxY9027rs=: 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.972 22:52:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.972 22:52:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.542 nvme0n1 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]] 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.542 22:52:20 
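
Note: one positive-path iteration above, reduced to its bare RPC sequence (rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py, and get_main_ns_ip resolved 10.0.0.1 from NVMF_INITIATOR_IP because the transport is tcp):

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # authenticated
    rpc_cmd bdev_nvme_detach_controller nvme0
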
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.542 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.112 nvme0n1 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: ]] 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.112 22:52:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.681 nvme0n1 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDRlN2UwZmMxZjQzYTg1MGRiMmVhYWY3OGU1YWQ3MjdjZWE1OTA5N2JhOGE5NzAy4+4Org==: 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: ]] 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzM3MjM1NTZhYWQzYzBiNGZjMGUxNWM2NmQ5MmY3MTPrrVq0: 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.681 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.251 nvme0n1 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTNmZjI4NTc4Y2FlNGVlYWI3MWM2M2ZlOTk1NzQ0MzUyMzA3NWYzNjMzODA4MWJmZmJmNDRlMzlhYjVmNDZiN0RsFmw=: 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.251 22:52:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.251 22:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.821 nvme0n1 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
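
Note: the host/auth.sh@101 and @102 markers repeated through this phase show the loop structure driving it. Reconstructed as a sketch (the array contents are inferred from the traced calls, so treat them as illustrative):

    for dhgroup in "${dhgroups[@]}"; do      # ffdhe6144, then ffdhe8192 in this log
        for keyid in "${!keys[@]}"; do       # keyids 0 through 4
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"     # host/auth.sh@103
            connect_authenticate sha512 "$dhgroup" "$keyid"   # host/auth.sh@104
        done
    done
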
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]] 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:07.821 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:07.822 request:
00:21:07.822 {
00:21:07.822 "name": "nvme0",
00:21:07.822 "trtype": "tcp",
00:21:07.822 "traddr": "10.0.0.1",
00:21:07.822 "adrfam": "ipv4",
00:21:07.822 "trsvcid": "4420",
00:21:07.822 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:21:07.822 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:21:07.822 "prchk_reftag": false,
00:21:07.822 "prchk_guard": false,
00:21:07.822 "hdgst": false,
00:21:07.822 "ddgst": false,
00:21:07.822 "allow_unrecognized_csi": false,
00:21:07.822 "method": "bdev_nvme_attach_controller",
00:21:07.822 "req_id": 1
00:21:07.822 }
00:21:07.822 Got JSON-RPC error response
00:21:07.822 response:
00:21:07.822 {
00:21:07.822 "code": -5,
00:21:07.822 "message": "Input/output error"
00:21:07.822 }
00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- #
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.822 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:08.082 request:
00:21:08.082 {
00:21:08.082 "name": "nvme0",
00:21:08.082 "trtype": "tcp",
00:21:08.082 "traddr": "10.0.0.1",
00:21:08.082 "adrfam": "ipv4",
00:21:08.082 "trsvcid": "4420",
00:21:08.082 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:21:08.082 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:21:08.082 "prchk_reftag": false,
00:21:08.082 "prchk_guard": false,
00:21:08.082 "hdgst": false,
00:21:08.082 "ddgst": false,
00:21:08.082 "dhchap_key": "key2",
00:21:08.082 "allow_unrecognized_csi": false,
00:21:08.082 "method": "bdev_nvme_attach_controller",
00:21:08.082 "req_id": 1
00:21:08.082 }
00:21:08.082 Got JSON-RPC error response
00:21:08.082 response:
00:21:08.082 {
00:21:08.082 "code": -5,
00:21:08.082 "message": "Input/output error"
00:21:08.082 }
00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:08.082 22:52:22
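
Note: the NOT wrapper (common/autotest_common.sh@650-677 in the trace) inverts a command's exit status, so an expected authentication failure such as the -5 Input/output error above counts as a pass. A sketch matching the traced lines; valid_exec_arg's internals and the signal/expected-output branches (both no-ops in this run) are elided:

    NOT() {
        local es=0
        valid_exec_arg "$@"    # @652: refuse to wrap something that is not runnable
        "$@" || es=$?          # @653: the expected failure lands here (es=1 above)
        (( !es == 0 ))         # @677: succeed only if the wrapped command failed
    }
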
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:08.082 request:
00:21:08.082 {
00:21:08.082 "name": "nvme0",
00:21:08.082 "trtype": "tcp",
00:21:08.082 "traddr": "10.0.0.1",
00:21:08.082 "adrfam": "ipv4",
00:21:08.082 "trsvcid": "4420",
00:21:08.082 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:21:08.082 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:21:08.082 "prchk_reftag": false,
00:21:08.082 "prchk_guard": false,
00:21:08.082 "hdgst": false,
00:21:08.082 "ddgst": false,
00:21:08.082 "dhchap_key": "key1",
00:21:08.082 "dhchap_ctrlr_key": "ckey2",
00:21:08.082 "allow_unrecognized_csi": false,
00:21:08.082 "method": "bdev_nvme_attach_controller",
00:21:08.082 "req_id": 1
00:21:08.082 }
00:21:08.082 Got JSON-RPC error response
00:21:08.082 response:
00:21:08.082 {
00:21:08.082 "code": -5,
00:21:08.082 "message": "Input/output error"
00:21:08.082 }
00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:08.082 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:08.083 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:21:08.083 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:08.083 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:08.083 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:08.083 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.083 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.083 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:08.083 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.083 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:08.083 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:08.083 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:08.083 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:08.083 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.083 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.083 nvme0n1 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- #
key=DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: ]] 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:21:08.343 request:
00:21:08.343 {
00:21:08.343 "name": "nvme0",
00:21:08.343 "dhchap_key": "key1",
00:21:08.343 "dhchap_ctrlr_key": "ckey2",
00:21:08.343 "method": "bdev_nvme_set_keys",
00:21:08.343 "req_id": 1
00:21:08.343 }
00:21:08.343 Got JSON-RPC error response
00:21:08.343 response:
00:21:08.343 {
00:21:08.343 "code": -13,
00:21:08.343 "message": "Permission denied"
00:21:08.343 }
00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:08.343 22:52:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:09.279 22:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.279 22:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:09.279 22:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.279 22:52:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.280 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWZjNmE0ZWRlYmYwNjBmMWMwNjM3YTUyYjlhNmE3MGRiNGU5NzJkNzdjMWNmMDg2RG3iIw==: 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: ]] 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGZjZWU2YmQ1YTVhYWM4MDI0ZDkyMzFlMmE1N2UyNDlkZDM4YThiMGIzNjdjZTdkCVrmZQ==: 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@142 -- # get_main_ns_ip 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.538 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.538 nvme0n1 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGY1YzJiY2I2MTQ1MDFkNzg4Y2ZlZWM3NWE1MDViZjW69M9d: 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: ]] 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzhjZTlkOWQxOWE1ODAyZmE3ZDFlMzQ5ZDMwYzAyYjUyhhvR: 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.539 request: 00:21:09.539 { 00:21:09.539 "name": "nvme0", 00:21:09.539 "dhchap_key": "key2", 00:21:09.539 "dhchap_ctrlr_key": "ckey1", 00:21:09.539 "method": "bdev_nvme_set_keys", 00:21:09.539 "req_id": 1 00:21:09.539 } 00:21:09.539 Got JSON-RPC error response 00:21:09.539 response: 00:21:09.539 { 00:21:09.539 "code": -13, 00:21:09.539 "message": "Permission denied" 00:21:09.539 } 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:09.539 22:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:10.912 rmmod nvme_tcp 00:21:10.912 rmmod nvme_fabrics 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 92596 ']' 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 92596 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 92596 ']' 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 92596 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92596 00:21:10.912 killing process with pid 92596 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92596' 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 92596 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 92596 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:10.912 22:52:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:10.912 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:21:11.170 22:52:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:11.737 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:11.996 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:21:11.996 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:11.996 22:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.kju /tmp/spdk.key-null.O74 /tmp/spdk.key-sha256.uda /tmp/spdk.key-sha384.dYk /tmp/spdk.key-sha512.nvt /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:21:11.996 22:52:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:12.564 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:12.564 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:12.564 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:12.564 00:21:12.564 real 0m35.078s 00:21:12.564 user 0m32.442s 00:21:12.564 sys 0m3.865s 00:21:12.564 22:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:12.564 22:52:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.564 ************************************ 00:21:12.564 END TEST nvmf_auth_host 00:21:12.564 ************************************ 00:21:12.564 22:52:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:21:12.564 22:52:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:12.564 22:52:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:12.564 22:52:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:12.564 22:52:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.564 ************************************ 00:21:12.564 START TEST nvmf_digest 00:21:12.564 ************************************ 00:21:12.564 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:12.564 * Looking for test storage... 
00:21:12.564 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:12.564 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:12.564 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:21:12.564 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:12.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.824 --rc genhtml_branch_coverage=1 00:21:12.824 --rc genhtml_function_coverage=1 00:21:12.824 --rc genhtml_legend=1 00:21:12.824 --rc geninfo_all_blocks=1 00:21:12.824 --rc geninfo_unexecuted_blocks=1 00:21:12.824 00:21:12.824 ' 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:12.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.824 --rc genhtml_branch_coverage=1 00:21:12.824 --rc genhtml_function_coverage=1 00:21:12.824 --rc genhtml_legend=1 00:21:12.824 --rc geninfo_all_blocks=1 00:21:12.824 --rc geninfo_unexecuted_blocks=1 00:21:12.824 00:21:12.824 ' 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:12.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.824 --rc genhtml_branch_coverage=1 00:21:12.824 --rc genhtml_function_coverage=1 00:21:12.824 --rc genhtml_legend=1 00:21:12.824 --rc geninfo_all_blocks=1 00:21:12.824 --rc geninfo_unexecuted_blocks=1 00:21:12.824 00:21:12.824 ' 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:12.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.824 --rc genhtml_branch_coverage=1 00:21:12.824 --rc genhtml_function_coverage=1 00:21:12.824 --rc genhtml_legend=1 00:21:12.824 --rc geninfo_all_blocks=1 00:21:12.824 --rc geninfo_unexecuted_blocks=1 00:21:12.824 00:21:12.824 ' 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.824 22:52:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:12.824 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:12.825 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:12.825 Cannot find device "nvmf_init_br" 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:12.825 Cannot find device "nvmf_init_br2" 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:12.825 Cannot find device "nvmf_tgt_br" 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:21:12.825 Cannot find device "nvmf_tgt_br2" 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:12.825 Cannot find device "nvmf_init_br" 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:12.825 Cannot find device "nvmf_init_br2" 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:12.825 Cannot find device "nvmf_tgt_br" 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:12.825 Cannot find device "nvmf_tgt_br2" 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:12.825 Cannot find device "nvmf_br" 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:12.825 Cannot find device "nvmf_init_if" 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:12.825 Cannot find device "nvmf_init_if2" 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:12.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:12.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:12.825 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:13.085 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:13.086 22:52:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:13.086 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:13.086 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:21:13.086 00:21:13.086 --- 10.0.0.3 ping statistics --- 00:21:13.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.086 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:13.086 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:13.086 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:21:13.086 00:21:13.086 --- 10.0.0.4 ping statistics --- 00:21:13.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.086 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:13.086 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:13.086 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:21:13.086 00:21:13.086 --- 10.0.0.1 ping statistics --- 00:21:13.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.086 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:13.086 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.086 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:21:13.086 00:21:13.086 --- 10.0.0.2 ping statistics --- 00:21:13.086 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.086 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # return 0 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:13.086 ************************************ 00:21:13.086 START TEST nvmf_digest_clean 00:21:13.086 ************************************ 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=94243 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 94243 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94243 ']' 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.086 22:52:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:13.086 [2024-12-07 22:52:27.839984] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:13.086 [2024-12-07 22:52:27.840080] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.347 [2024-12-07 22:52:27.982087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.347 [2024-12-07 22:52:28.025983] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.347 [2024-12-07 22:52:28.026045] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.347 [2024-12-07 22:52:28.026058] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:13.347 [2024-12-07 22:52:28.026068] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:13.347 [2024-12-07 22:52:28.026077] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:13.347 [2024-12-07 22:52:28.026111] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.347 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.347 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:13.347 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:13.347 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:13.347 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:13.607 [2024-12-07 22:52:28.187244] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:13.607 null0 00:21:13.607 [2024-12-07 22:52:28.223944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.607 [2024-12-07 22:52:28.248070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94263 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94263 /var/tmp/bperf.sock 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94263 ']' 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.607 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:13.607 [2024-12-07 22:52:28.311895] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:21:13.607 [2024-12-07 22:52:28.311992] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94263 ] 00:21:13.866 [2024-12-07 22:52:28.452305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.866 [2024-12-07 22:52:28.492769] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.866 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.866 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:13.866 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:13.866 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:13.866 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:14.126 [2024-12-07 22:52:28.859708] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:14.385 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:14.385 22:52:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:14.643 nvme0n1 00:21:14.643 22:52:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:14.643 22:52:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:14.643 Running I/O for 2 seconds... 
00:21:16.955 17526.00 IOPS, 68.46 MiB/s [2024-12-07T22:52:31.721Z] 17780.00 IOPS, 69.45 MiB/s
00:21:16.955 Latency(us)
00:21:16.955 [2024-12-07T22:52:31.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:16.955 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:21:16.955 nvme0n1 : 2.01 17802.49 69.54 0.00 0.00 7185.16 6642.97 17992.61
00:21:16.955 [2024-12-07T22:52:31.721Z] ===================================================================================================================
00:21:16.955 [2024-12-07T22:52:31.721Z] Total : 17802.49 69.54 0.00 0.00 7185.16 6642.97 17992.61
00:21:16.955 {
00:21:16.955 "results": [
00:21:16.955 {
00:21:16.955 "job": "nvme0n1",
00:21:16.955 "core_mask": "0x2",
00:21:16.955 "workload": "randread",
00:21:16.955 "status": "finished",
00:21:16.955 "queue_depth": 128,
00:21:16.955 "io_size": 4096,
00:21:16.955 "runtime": 2.011797,
00:21:16.955 "iops": 17802.4920009325,
00:21:16.955 "mibps": 69.54098437864258,
00:21:16.955 "io_failed": 0,
00:21:16.955 "io_timeout": 0,
00:21:16.955 "avg_latency_us": 7185.157006739177,
00:21:16.955 "min_latency_us": 6642.967272727273,
00:21:16.955 "max_latency_us": 17992.61090909091
00:21:16.955 }
00:21:16.955 ],
00:21:16.955 "core_count": 1
00:21:16.955 }
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:21:16.956 | select(.opcode=="crc32c")
00:21:16.956 | "\(.module_name) \(.executed)"'
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94263
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94263 ']'
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94263
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94263
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:21:16.956 killing process with pid 94263
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94263'
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94263
00:21:16.956 Received shutdown signal, test time was about 2.000000 seconds
00:21:16.956
00:21:16.956 Latency(us)
00:21:16.956 [2024-12-07T22:52:31.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:16.956 [2024-12-07T22:52:31.722Z] ===================================================================================================================
00:21:16.956 [2024-12-07T22:52:31.722Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:16.956 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94263
00:21:17.215 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:21:17.215 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:21:17.215 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:21:17.215 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:21:17.215 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:21:17.215 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:21:17.215 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:21:17.215 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94310
00:21:17.215 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94310 /var/tmp/bperf.sock
00:21:17.215 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:21:17.215 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94310 ']'
00:21:17.215 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:17.215 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:17.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:21:17.215 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:17.215 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:17.215 22:52:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:21:17.215 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:17.215 Zero copy mechanism will not be used.
00:21:17.215 [2024-12-07 22:52:31.860767] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:21:17.215 [2024-12-07 22:52:31.860864] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94310 ]
00:21:17.475 [2024-12-07 22:52:31.998861] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:17.475 [2024-12-07 22:52:32.030795] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:21:17.475 22:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:17.475 22:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0
00:21:17.475 22:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:21:17.475 22:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:21:17.475 22:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:21:17.735 [2024-12-07 22:52:32.320837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:21:17.735 22:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:17.735 22:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:17.995 nvme0n1
00:21:17.995 22:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:21:17.995 22:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:18.254 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:18.254 Zero copy mechanism will not be used.
00:21:18.254 Running I/O for 2 seconds...
00:21:20.177 8816.00 IOPS, 1102.00 MiB/s [2024-12-07T22:52:34.943Z] 8912.00 IOPS, 1114.00 MiB/s
00:21:20.177 Latency(us)
00:21:20.177 [2024-12-07T22:52:34.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:20.177 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:21:20.177 nvme0n1 : 2.00 8908.16 1113.52 0.00 0.00 1793.39 1601.16 7268.54
00:21:20.177 [2024-12-07T22:52:34.943Z] ===================================================================================================================
00:21:20.177 [2024-12-07T22:52:34.943Z] Total : 8908.16 1113.52 0.00 0.00 1793.39 1601.16 7268.54
00:21:20.177 {
00:21:20.177 "results": [
00:21:20.177 {
00:21:20.177 "job": "nvme0n1",
00:21:20.177 "core_mask": "0x2",
00:21:20.177 "workload": "randread",
00:21:20.177 "status": "finished",
00:21:20.177 "queue_depth": 16,
00:21:20.177 "io_size": 131072,
00:21:20.177 "runtime": 2.002658,
00:21:20.177 "iops": 8908.161053959288,
00:21:20.177 "mibps": 1113.520131744911,
00:21:20.177 "io_failed": 0,
00:21:20.177 "io_timeout": 0,
00:21:20.177 "avg_latency_us": 1793.391562984101,
00:21:20.177 "min_latency_us": 1601.1636363636364,
00:21:20.177 "max_latency_us": 7268.538181818182
00:21:20.177 }
00:21:20.177 ],
00:21:20.177 "core_count": 1
00:21:20.177 }
00:21:20.177 22:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:21:20.177 22:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:21:20.177 22:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:21:20.177 22:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:21:20.177 22:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:21:20.177 | select(.opcode=="crc32c")
00:21:20.177 | "\(.module_name) \(.executed)"'
00:21:20.438 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:21:20.438 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:21:20.438 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:21:20.438 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:21:20.438 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94310
00:21:20.438 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94310 ']'
00:21:20.438 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94310
00:21:20.438 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:21:20.438 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:20.438 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94310
00:21:20.438 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:20.438 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
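(The crc32c accounting check above can be replayed in isolation. The jq filter is copied verbatim from the trace; the sample accel_get_stats payload below is an invented stand-in containing only the fields the filter touches.)

    echo '{"operations": [{"opcode": "crc32c", "module_name": "software", "executed": 8908}]}' \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # prints "software 8908"; host/digest.sh reads this into acc_module/acc_executed,
    # then asserts (( acc_executed > 0 )) and that acc_module matches the expected module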
00:21:20.438 killing process with pid 94310
00:21:20.438 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94310'
00:21:20.438 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94310
00:21:20.438 Received shutdown signal, test time was about 2.000000 seconds
00:21:20.438
00:21:20.438 Latency(us)
00:21:20.438 [2024-12-07T22:52:35.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:20.438 [2024-12-07T22:52:35.204Z] ===================================================================================================================
00:21:20.438 [2024-12-07T22:52:35.204Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:20.438 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94310
00:21:20.698 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:21:20.698 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:21:20.698 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:21:20.698 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:21:20.698 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:21:20.698 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:21:20.698 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:21:20.698 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94363
00:21:20.698 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94363 /var/tmp/bperf.sock
00:21:20.698 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94363 ']'
00:21:20.698 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:21:20.698 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:20.698 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:20.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:21:20.698 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:20.698 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:20.698 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:21:20.698 [2024-12-07 22:52:35.290397] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:21:20.698 [2024-12-07 22:52:35.290493] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94363 ]
00:21:20.698 [2024-12-07 22:52:35.426642] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:20.698 [2024-12-07 22:52:35.458272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:21:20.956 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:20.956 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0
00:21:20.956 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:21:20.956 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:21:20.956 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:21:21.216 [2024-12-07 22:52:35.812499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:21:21.216 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:21.216 22:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:21.475 nvme0n1
00:21:21.475 22:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:21:21.475 22:52:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:21.475 Running I/O for 2 seconds...
00:21:23.792 19305.00 IOPS, 75.41 MiB/s [2024-12-07T22:52:38.558Z] 19177.50 IOPS, 74.91 MiB/s
00:21:23.792 Latency(us)
00:21:23.792 [2024-12-07T22:52:38.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:23.792 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:23.792 nvme0n1 : 2.00 19215.14 75.06 0.00 0.00 6655.32 5451.40 14537.08
00:21:23.792 [2024-12-07T22:52:38.558Z] ===================================================================================================================
00:21:23.792 [2024-12-07T22:52:38.558Z] Total : 19215.14 75.06 0.00 0.00 6655.32 5451.40 14537.08
00:21:23.792 {
00:21:23.792 "results": [
00:21:23.792 {
00:21:23.792 "job": "nvme0n1",
00:21:23.792 "core_mask": "0x2",
00:21:23.792 "workload": "randwrite",
00:21:23.792 "status": "finished",
00:21:23.792 "queue_depth": 128,
00:21:23.792 "io_size": 4096,
00:21:23.792 "runtime": 2.002744,
00:21:23.792 "iops": 19215.13683226613,
00:21:23.792 "mibps": 75.05912825103957,
00:21:23.792 "io_failed": 0,
00:21:23.792 "io_timeout": 0,
00:21:23.792 "avg_latency_us": 6655.318563805033,
00:21:23.792 "min_latency_us": 5451.403636363636,
00:21:23.792 "max_latency_us": 14537.076363636364
00:21:23.792 }
00:21:23.792 ],
00:21:23.792 "core_count": 1
00:21:23.792 }
00:21:23.792 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:21:23.792 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:21:23.792 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:21:23.792 | select(.opcode=="crc32c")
00:21:23.792 | "\(.module_name) \(.executed)"'
00:21:23.792 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:21:23.792 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:21:23.792 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:21:23.792 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:21:23.792 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:21:23.792 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:21:23.792 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94363
00:21:23.792 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94363 ']'
00:21:23.792 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94363
00:21:23.792 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:21:23.792 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:23.792 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94363
00:21:24.052 killing process with pid 94363 Received shutdown signal, test time was about 2.000000 seconds
00:21:24.052
00:21:24.052 Latency(us)
00:21:24.052 [2024-12-07T22:52:38.818Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:24.052 [2024-12-07T22:52:38.818Z] ===================================================================================================================
00:21:24.052 [2024-12-07T22:52:38.818Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:24.052 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:24.052 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:21:24.052 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94363'
00:21:24.052 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94363
00:21:24.052 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94363
00:21:24.052 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:21:24.052 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:21:24.052 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:21:24.052 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:21:24.052 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:21:24.052 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:21:24.052 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:21:24.052 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94412
00:21:24.052 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:21:24.052 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94412 /var/tmp/bperf.sock
00:21:24.052 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94412 ']'
00:21:24.052 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:24.053 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:24.053 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:24.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:21:24.053 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:24.053 22:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:21:24.053 [2024-12-07 22:52:38.764528] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:21:24.053 [2024-12-07 22:52:38.764645] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94412 ]
00:21:24.053 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:24.053 Zero copy mechanism will not be used.
00:21:24.312 [2024-12-07 22:52:38.893768] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:24.312 [2024-12-07 22:52:38.925556] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:21:25.249 22:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:25.249 22:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0
00:21:25.249 22:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:21:25.249 22:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:21:25.249 22:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:21:25.249 [2024-12-07 22:52:39.971180] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:21:25.249 22:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:25.249 22:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:25.817 nvme0n1
00:21:25.817 22:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:21:25.817 22:52:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:25.817 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:25.817 Zero copy mechanism will not be used.
00:21:25.817 Running I/O for 2 seconds...
00:21:27.699 7249.00 IOPS, 906.12 MiB/s [2024-12-07T22:52:42.465Z] 7271.50 IOPS, 908.94 MiB/s
00:21:27.699 Latency(us)
00:21:27.699 [2024-12-07T22:52:42.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:27.699 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:21:27.699 nvme0n1 : 2.00 7267.28 908.41 0.00 0.00 2197.03 1765.00 11081.54
00:21:27.699 [2024-12-07T22:52:42.465Z] ===================================================================================================================
00:21:27.699 [2024-12-07T22:52:42.465Z] Total : 7267.28 908.41 0.00 0.00 2197.03 1765.00 11081.54
00:21:27.699 {
00:21:27.699 "results": [
00:21:27.699 {
00:21:27.699 "job": "nvme0n1",
00:21:27.699 "core_mask": "0x2",
00:21:27.699 "workload": "randwrite",
00:21:27.699 "status": "finished",
00:21:27.699 "queue_depth": 16,
00:21:27.699 "io_size": 131072,
00:21:27.699 "runtime": 2.003364,
00:21:27.699 "iops": 7267.276441026194,
00:21:27.699 "mibps": 908.4095551282743,
00:21:27.699 "io_failed": 0,
00:21:27.699 "io_timeout": 0,
00:21:27.699 "avg_latency_us": 2197.0339481357987,
00:21:27.699 "min_latency_us": 1765.0036363636364,
00:21:27.699 "max_latency_us": 11081.541818181819
00:21:27.699 }
00:21:27.699 ],
00:21:27.699 "core_count": 1
00:21:27.699 }
00:21:27.699 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:21:27.699 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:21:27.699 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:21:27.699 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:21:27.699 | select(.opcode=="crc32c")
00:21:27.699 | "\(.module_name) \(.executed)"'
00:21:27.700 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94412
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94412 ']'
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94412
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94412
00:21:28.267 killing process with pid 94412 Received shutdown signal, test time was about 2.000000 seconds
00:21:28.267
00:21:28.267 Latency(us)
00:21:28.267 [2024-12-07T22:52:43.033Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:28.267 [2024-12-07T22:52:43.033Z] ===================================================================================================================
00:21:28.267 [2024-12-07T22:52:43.033Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94412'
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94412
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94412
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94243
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94243 ']'
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94243
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94243
00:21:28.267 killing process with pid 94243
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94243'
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94243
00:21:28.267 22:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94243
00:21:28.525
00:21:28.525 real 0m15.276s
00:21:28.525 user 0m30.054s
00:21:28.525 sys 0m4.205s
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:21:28.526 ************************************
00:21:28.526 END TEST nvmf_digest_clean
00:21:28.526 ************************************
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:21:28.526 ************************************
00:21:28.526 START TEST nvmf_digest_error
00:21:28.526 ************************************
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=94491
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 94491
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94491 ']'
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:28.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:28.526 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:28.526 [2024-12-07 22:52:43.171477] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:21:28.526 [2024-12-07 22:52:43.171574] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:28.784 [2024-12-07 22:52:43.311083] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:28.784 [2024-12-07 22:52:43.346799] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:28.784 [2024-12-07 22:52:43.346895] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:28.784 [2024-12-07 22:52:43.346908] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:28.784 [2024-12-07 22:52:43.346916] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:28.784 [2024-12-07 22:52:43.346923] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:28.784 [2024-12-07 22:52:43.346950] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:21:28.784 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:28.784 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:21:28.785 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt
00:21:28.785 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable
00:21:28.785 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:28.785 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:28.785 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:21:28.785 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:28.785 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:28.785 [2024-12-07 22:52:43.459367] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:21:28.785 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:28.785 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:21:28.785 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:21:28.785 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:28.785 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:28.785 [2024-12-07 22:52:43.495070] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:21:29.044 null0
00:21:29.044 [2024-12-07 22:52:43.526790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:29.044 [2024-12-07 22:52:43.550968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:21:29.044 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:29.044 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:21:29.044 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:21:29.044 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:21:29.044 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:21:29.044 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:21:29.044 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94520
00:21:29.044 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94520 /var/tmp/bperf.sock
00:21:29.044 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:21:29.044 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94520 ']'
00:21:29.044 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:29.044 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:29.044 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:29.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:21:29.044 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:29.044 22:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:29.044 [2024-12-07 22:52:43.599970] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:21:29.044 [2024-12-07 22:52:43.600067] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94520 ]
00:21:29.044 [2024-12-07 22:52:43.728495] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:29.044 [2024-12-07 22:52:43.760658] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:21:29.044 [2024-12-07 22:52:43.787979] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:21:29.981 22:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:29.981 22:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:21:29.981 22:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:29.981 22:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:30.240 22:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:30.240 22:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:30.240 22:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:30.240 22:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:30.240 22:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:30.240 22:52:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:30.506 nvme0n1
00:21:30.506 22:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:21:30.506 22:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:30.506 22:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:30.506 22:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:30.506 22:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:21:30.506 22:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:30.507 Running I/O for 2 seconds...
00:21:30.769 [2024-12-07 22:52:45.279490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.769 [2024-12-07 22:52:45.279575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.769 [2024-12-07 22:52:45.279594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.769 [2024-12-07 22:52:45.294016] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.769 [2024-12-07 22:52:45.294066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.769 [2024-12-07 22:52:45.294093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.769 [2024-12-07 22:52:45.308037] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.769 [2024-12-07 22:52:45.308086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.769 [2024-12-07 22:52:45.308113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.769 [2024-12-07 22:52:45.321904] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.769 [2024-12-07 22:52:45.321951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.769 [2024-12-07 22:52:45.321979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.769 [2024-12-07 22:52:45.335908] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.769 [2024-12-07 22:52:45.335955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.769 [2024-12-07 22:52:45.335983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.769 [2024-12-07 22:52:45.349781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.769 [2024-12-07 22:52:45.349839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.769 [2024-12-07 22:52:45.349869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.769 [2024-12-07 22:52:45.363747] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.769 [2024-12-07 22:52:45.363795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.769 [2024-12-07 22:52:45.363823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.769 [2024-12-07 22:52:45.377734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.769 [2024-12-07 22:52:45.377782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.769 [2024-12-07 22:52:45.377810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.769 [2024-12-07 22:52:45.391845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.770 [2024-12-07 22:52:45.391901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.770 [2024-12-07 22:52:45.391929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.770 [2024-12-07 22:52:45.405808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.770 [2024-12-07 22:52:45.405857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.770 [2024-12-07 22:52:45.405893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.770 [2024-12-07 22:52:45.419838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.770 [2024-12-07 22:52:45.419910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.770 [2024-12-07 22:52:45.419922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.770 [2024-12-07 22:52:45.433680] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.770 [2024-12-07 22:52:45.433727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.770 [2024-12-07 22:52:45.433754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.770 [2024-12-07 22:52:45.447614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.770 [2024-12-07 22:52:45.447661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.770 [2024-12-07 22:52:45.447689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.770 [2024-12-07 22:52:45.461692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.770 [2024-12-07 22:52:45.461739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:8629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.770 [2024-12-07 22:52:45.461766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.770 [2024-12-07 22:52:45.475739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.770 [2024-12-07 22:52:45.475789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.770 [2024-12-07 22:52:45.475817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.770 [2024-12-07 22:52:45.489636] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.770 [2024-12-07 22:52:45.489683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:14396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.770 [2024-12-07 22:52:45.489710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.770 [2024-12-07 22:52:45.503647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.770 [2024-12-07 22:52:45.503694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.770 [2024-12-07 22:52:45.503721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.770 [2024-12-07 22:52:45.517650] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.770 [2024-12-07 22:52:45.517699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.770 [2024-12-07 22:52:45.517726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:30.770 [2024-12-07 22:52:45.532241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:30.770 [2024-12-07 22:52:45.532305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:30.770 [2024-12-07 22:52:45.532317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.028 [2024-12-07 22:52:45.546955] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.028 [2024-12-07 22:52:45.547020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.028 [2024-12-07 22:52:45.547047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.028 [2024-12-07 22:52:45.560979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.028 [2024-12-07 22:52:45.561025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.028 [2024-12-07 22:52:45.561052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.028 [2024-12-07 22:52:45.574960] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.028 [2024-12-07 22:52:45.575009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.028 [2024-12-07 22:52:45.575037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.028 [2024-12-07 22:52:45.588730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.028 [2024-12-07 22:52:45.588778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.028 [2024-12-07 22:52:45.588804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.028 [2024-12-07 22:52:45.602582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.028 [2024-12-07 22:52:45.602630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.028 [2024-12-07 22:52:45.602657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.028 [2024-12-07 22:52:45.617006] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.028 [2024-12-07 22:52:45.617053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.028 [2024-12-07 22:52:45.617080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.028 [2024-12-07 22:52:45.631289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.028 [2024-12-07 22:52:45.631335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.028 [2024-12-07 22:52:45.631362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.028 [2024-12-07 22:52:45.645065] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.028 [2024-12-07 22:52:45.645113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.028 [2024-12-07 22:52:45.645140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.028 [2024-12-07 22:52:45.659038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.028 [2024-12-07 22:52:45.659101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.028 [2024-12-07 22:52:45.659128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.028 [2024-12-07 22:52:45.673042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.028 [2024-12-07 22:52:45.673090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.028 [2024-12-07 22:52:45.673117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.028 [2024-12-07 22:52:45.686930] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.028 [2024-12-07 22:52:45.686979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.028 [2024-12-07 22:52:45.687007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.028 [2024-12-07 22:52:45.700788] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.028 [2024-12-07 22:52:45.700836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.028 [2024-12-07 22:52:45.700863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.028 [2024-12-07 22:52:45.714778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.028 [2024-12-07 22:52:45.714827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.028 [2024-12-07 22:52:45.714854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.028 [2024-12-07 22:52:45.729755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.028 [2024-12-07 22:52:45.729804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.028 [2024-12-07 22:52:45.729831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.028 [2024-12-07 22:52:45.746033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.028 [2024-12-07 22:52:45.746067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.028 [2024-12-07 22:52:45.746095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.028 [2024-12-07 22:52:45.762380] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.028 [2024-12-07 22:52:45.762413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.028 [2024-12-07 22:52:45.762441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.029 [2024-12-07 22:52:45.777915] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.029 [2024-12-07 22:52:45.777947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.029 [2024-12-07 22:52:45.777974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.287 [2024-12-07 22:52:45.793485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.287 [2024-12-07 22:52:45.793518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.287 [2024-12-07 22:52:45.793530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.287 [2024-12-07 22:52:45.808916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.287 [2024-12-07 22:52:45.808949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.287 [2024-12-07 22:52:45.808977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.287 [2024-12-07 22:52:45.824030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.287 [2024-12-07 22:52:45.824062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.287 [2024-12-07 22:52:45.824089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:31.287 [2024-12-07 22:52:45.838949] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510)
00:21:31.287 [2024-12-07 22:52:45.838983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.287 [2024-12-07 22:52:45.839010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0
dnr:0 00:21:31.287 [2024-12-07 22:52:45.853805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.287 [2024-12-07 22:52:45.853839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.287 [2024-12-07 22:52:45.853865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.287 [2024-12-07 22:52:45.868854] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.287 [2024-12-07 22:52:45.868910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.287 [2024-12-07 22:52:45.868938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.287 [2024-12-07 22:52:45.883995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.287 [2024-12-07 22:52:45.884027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.287 [2024-12-07 22:52:45.884054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.287 [2024-12-07 22:52:45.898811] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.287 [2024-12-07 22:52:45.898846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.287 [2024-12-07 22:52:45.898874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.287 [2024-12-07 22:52:45.913901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.287 [2024-12-07 22:52:45.913943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.287 [2024-12-07 22:52:45.913970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.287 [2024-12-07 22:52:45.928088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.287 [2024-12-07 22:52:45.928136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.287 [2024-12-07 22:52:45.928163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.287 [2024-12-07 22:52:45.941914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.287 [2024-12-07 22:52:45.941962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.287 [2024-12-07 22:52:45.941988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.287 [2024-12-07 22:52:45.955974] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.287 [2024-12-07 22:52:45.956021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.287 [2024-12-07 22:52:45.956056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.287 [2024-12-07 22:52:45.969911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.287 [2024-12-07 22:52:45.969959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.287 [2024-12-07 22:52:45.969986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.287 [2024-12-07 22:52:45.983877] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.287 [2024-12-07 22:52:45.983935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.287 [2024-12-07 22:52:45.983963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.287 [2024-12-07 22:52:45.997844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.287 [2024-12-07 22:52:45.997915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.287 [2024-12-07 22:52:45.997927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.287 [2024-12-07 22:52:46.012078] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.287 [2024-12-07 22:52:46.012126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.287 [2024-12-07 22:52:46.012153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.287 [2024-12-07 22:52:46.026133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.287 [2024-12-07 22:52:46.026182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.287 [2024-12-07 22:52:46.026210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.287 [2024-12-07 22:52:46.040168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.287 [2024-12-07 22:52:46.040216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.287 [2024-12-07 22:52:46.040243] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.546 [2024-12-07 22:52:46.055347] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.546 [2024-12-07 22:52:46.055395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.546 [2024-12-07 22:52:46.055422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.546 [2024-12-07 22:52:46.069359] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.546 [2024-12-07 22:52:46.069408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.546 [2024-12-07 22:52:46.069435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.546 [2024-12-07 22:52:46.083410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.546 [2024-12-07 22:52:46.083458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.546 [2024-12-07 22:52:46.083486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.546 [2024-12-07 22:52:46.097295] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.546 [2024-12-07 22:52:46.097343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.546 [2024-12-07 22:52:46.097371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.546 [2024-12-07 22:52:46.111388] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.546 [2024-12-07 22:52:46.111436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.546 [2024-12-07 22:52:46.111463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.546 [2024-12-07 22:52:46.125312] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.546 [2024-12-07 22:52:46.125360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.546 [2024-12-07 22:52:46.125388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.546 [2024-12-07 22:52:46.139282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.546 [2024-12-07 22:52:46.139330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:31.546 [2024-12-07 22:52:46.139357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.546 [2024-12-07 22:52:46.153051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.546 [2024-12-07 22:52:46.153099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.546 [2024-12-07 22:52:46.153127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.546 [2024-12-07 22:52:46.166990] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.546 [2024-12-07 22:52:46.167039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.546 [2024-12-07 22:52:46.167067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.546 [2024-12-07 22:52:46.187890] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.546 [2024-12-07 22:52:46.187931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.546 [2024-12-07 22:52:46.187960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.547 [2024-12-07 22:52:46.204671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.547 [2024-12-07 22:52:46.204718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.547 [2024-12-07 22:52:46.204745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.547 [2024-12-07 22:52:46.220400] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.547 [2024-12-07 22:52:46.220447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.547 [2024-12-07 22:52:46.220474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.547 [2024-12-07 22:52:46.235817] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.547 [2024-12-07 22:52:46.235865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.547 [2024-12-07 22:52:46.235917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.547 [2024-12-07 22:52:46.250484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.547 [2024-12-07 22:52:46.250531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:118 nsid:1 lba:21538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.547 [2024-12-07 22:52:46.250558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.547 17458.00 IOPS, 68.20 MiB/s [2024-12-07T22:52:46.313Z] [2024-12-07 22:52:46.266189] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.547 [2024-12-07 22:52:46.266237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.547 [2024-12-07 22:52:46.266264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.547 [2024-12-07 22:52:46.280259] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.547 [2024-12-07 22:52:46.280322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.547 [2024-12-07 22:52:46.280350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.547 [2024-12-07 22:52:46.294932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.547 [2024-12-07 22:52:46.294981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.547 [2024-12-07 22:52:46.295008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.547 [2024-12-07 22:52:46.309547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.547 [2024-12-07 22:52:46.309596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.547 [2024-12-07 22:52:46.309623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.806 [2024-12-07 22:52:46.324448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.806 [2024-12-07 22:52:46.324495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.806 [2024-12-07 22:52:46.324522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.806 [2024-12-07 22:52:46.338608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.806 [2024-12-07 22:52:46.338657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.806 [2024-12-07 22:52:46.338723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.806 [2024-12-07 22:52:46.352732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1c90510) 00:21:31.806 [2024-12-07 22:52:46.352779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.806 [2024-12-07 22:52:46.352806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.806 [2024-12-07 22:52:46.366857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.806 [2024-12-07 22:52:46.366915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.806 [2024-12-07 22:52:46.366943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.806 [2024-12-07 22:52:46.380852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.806 [2024-12-07 22:52:46.380909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.806 [2024-12-07 22:52:46.380937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.806 [2024-12-07 22:52:46.394867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.806 [2024-12-07 22:52:46.394926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.806 [2024-12-07 22:52:46.394954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.806 [2024-12-07 22:52:46.409066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.806 [2024-12-07 22:52:46.409113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.806 [2024-12-07 22:52:46.409140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.806 [2024-12-07 22:52:46.423201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.806 [2024-12-07 22:52:46.423246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.806 [2024-12-07 22:52:46.423272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.806 [2024-12-07 22:52:46.437154] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.806 [2024-12-07 22:52:46.437201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.806 [2024-12-07 22:52:46.437228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.806 [2024-12-07 22:52:46.451250] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.806 [2024-12-07 22:52:46.451296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.806 [2024-12-07 22:52:46.451322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.806 [2024-12-07 22:52:46.465245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.806 [2024-12-07 22:52:46.465292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.806 [2024-12-07 22:52:46.465319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.806 [2024-12-07 22:52:46.479369] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.806 [2024-12-07 22:52:46.479415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.806 [2024-12-07 22:52:46.479442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.806 [2024-12-07 22:52:46.493481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.806 [2024-12-07 22:52:46.493530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.806 [2024-12-07 22:52:46.493557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.806 [2024-12-07 22:52:46.507639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.806 [2024-12-07 22:52:46.507687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:52 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.806 [2024-12-07 22:52:46.507715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.806 [2024-12-07 22:52:46.521823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.806 [2024-12-07 22:52:46.521894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.806 [2024-12-07 22:52:46.521906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.806 [2024-12-07 22:52:46.536464] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.806 [2024-12-07 22:52:46.536513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.806 [2024-12-07 22:52:46.536540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:31.806 [2024-12-07 22:52:46.550597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.807 [2024-12-07 22:52:46.550643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.807 [2024-12-07 22:52:46.550693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:31.807 [2024-12-07 22:52:46.564743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:31.807 [2024-12-07 22:52:46.564789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.807 [2024-12-07 22:52:46.564816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.580042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.580089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.067 [2024-12-07 22:52:46.580116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.594067] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.594113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.067 [2024-12-07 22:52:46.594141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.608403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.608450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.067 [2024-12-07 22:52:46.608480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.622429] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.622476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:19908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.067 [2024-12-07 22:52:46.622503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.636562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.636609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.067 [2024-12-07 22:52:46.636636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.650555] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.650601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.067 [2024-12-07 22:52:46.650628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.664622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.664669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.067 [2024-12-07 22:52:46.664696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.678658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.678727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.067 [2024-12-07 22:52:46.678755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.692743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.692790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.067 [2024-12-07 22:52:46.692816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.706849] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.706905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.067 [2024-12-07 22:52:46.706934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.721273] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.721320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.067 [2024-12-07 22:52:46.721347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.736047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.736094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.067 [2024-12-07 22:52:46.736121] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.750351] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.750397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.067 [2024-12-07 22:52:46.750424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.764591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.764638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.067 [2024-12-07 22:52:46.764665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.778818] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.778868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.067 [2024-12-07 22:52:46.778906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.792819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.792865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.067 [2024-12-07 22:52:46.792918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.807102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.807149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.067 [2024-12-07 22:52:46.807176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.067 [2024-12-07 22:52:46.821159] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.067 [2024-12-07 22:52:46.821207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.068 [2024-12-07 22:52:46.821234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:46.836567] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:46.836615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:32.327 [2024-12-07 22:52:46.836642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:46.850705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:46.850755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.327 [2024-12-07 22:52:46.850783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:46.864853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:46.864925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.327 [2024-12-07 22:52:46.864952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:46.879162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:46.879207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:19907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.327 [2024-12-07 22:52:46.879234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:46.893146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:46.893193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.327 [2024-12-07 22:52:46.893220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:46.907229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:46.907275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.327 [2024-12-07 22:52:46.907302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:46.922511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:46.922542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.327 [2024-12-07 22:52:46.922552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:46.939112] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:46.939148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:17338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.327 [2024-12-07 22:52:46.939175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:46.954212] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:46.954244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.327 [2024-12-07 22:52:46.954270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:46.969254] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:46.969286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.327 [2024-12-07 22:52:46.969312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:46.984044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:46.984075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.327 [2024-12-07 22:52:46.984102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:46.998891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:46.998933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.327 [2024-12-07 22:52:46.998961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:47.014011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:47.014043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.327 [2024-12-07 22:52:47.014070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:47.028913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:47.028944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.327 [2024-12-07 22:52:47.028971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:47.043907] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:47.043948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.327 [2024-12-07 22:52:47.043976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:47.058582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:47.058614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.327 [2024-12-07 22:52:47.058641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:47.073541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:47.073572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.327 [2024-12-07 22:52:47.073599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.327 [2024-12-07 22:52:47.088618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.327 [2024-12-07 22:52:47.088649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.328 [2024-12-07 22:52:47.088676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.586 [2024-12-07 22:52:47.104496] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.586 [2024-12-07 22:52:47.104528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.586 [2024-12-07 22:52:47.104556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.586 [2024-12-07 22:52:47.125678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.586 [2024-12-07 22:52:47.125730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.586 [2024-12-07 22:52:47.125742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.587 [2024-12-07 22:52:47.139767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.587 [2024-12-07 22:52:47.139815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.587 [2024-12-07 22:52:47.139843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.587 [2024-12-07 22:52:47.153733] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.587 
[2024-12-07 22:52:47.153783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.587 [2024-12-07 22:52:47.153810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.587 [2024-12-07 22:52:47.167800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.587 [2024-12-07 22:52:47.167847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.587 [2024-12-07 22:52:47.167874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.587 [2024-12-07 22:52:47.181615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.587 [2024-12-07 22:52:47.181662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.587 [2024-12-07 22:52:47.181688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.587 [2024-12-07 22:52:47.195572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.587 [2024-12-07 22:52:47.195620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.587 [2024-12-07 22:52:47.195647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.587 [2024-12-07 22:52:47.211333] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.587 [2024-12-07 22:52:47.211364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.587 [2024-12-07 22:52:47.211391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.587 [2024-12-07 22:52:47.228098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.587 [2024-12-07 22:52:47.228136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.587 [2024-12-07 22:52:47.228148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.587 [2024-12-07 22:52:47.243793] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c90510) 00:21:32.587 [2024-12-07 22:52:47.243841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.587 [2024-12-07 22:52:47.243869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.587 [2024-12-07 22:52:47.260127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c90510) 00:21:32.587 [2024-12-07 22:52:47.260176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.587 [2024-12-07 22:52:47.260204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.587 17458.00 IOPS, 68.20 MiB/s 00:21:32.587 Latency(us) 00:21:32.587 [2024-12-07T22:52:47.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.587 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:32.587 nvme0n1 : 2.01 17487.79 68.31 0.00 0.00 7314.29 6672.76 30742.34 00:21:32.587 [2024-12-07T22:52:47.353Z] =================================================================================================================== 00:21:32.587 [2024-12-07T22:52:47.353Z] Total : 17487.79 68.31 0.00 0.00 7314.29 6672.76 30742.34 00:21:32.587 { 00:21:32.587 "results": [ 00:21:32.587 { 00:21:32.587 "job": "nvme0n1", 00:21:32.587 "core_mask": "0x2", 00:21:32.587 "workload": "randread", 00:21:32.587 "status": "finished", 00:21:32.587 "queue_depth": 128, 00:21:32.587 "io_size": 4096, 00:21:32.587 "runtime": 2.011117, 00:21:32.587 "iops": 17487.794096514524, 00:21:32.587 "mibps": 68.31169568950986, 00:21:32.587 "io_failed": 0, 00:21:32.587 "io_timeout": 0, 00:21:32.587 "avg_latency_us": 7314.286924289812, 00:21:32.587 "min_latency_us": 6672.756363636364, 00:21:32.587 "max_latency_us": 30742.34181818182 00:21:32.587 } 00:21:32.587 ], 00:21:32.587 "core_count": 1 00:21:32.587 } 00:21:32.587 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:32.587 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:32.587 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:32.587 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:32.587 | .driver_specific 00:21:32.587 | .nvme_error 00:21:32.587 | .status_code 00:21:32.587 | .command_transient_transport_error' 00:21:32.846 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 137 > 0 )) 00:21:32.846 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94520 00:21:32.846 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94520 ']' 00:21:32.846 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94520 00:21:32.846 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:32.846 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:32.846 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94520 00:21:32.847 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:32.847 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:32.847 killing process with pid 94520 00:21:32.847 22:52:47 
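This is the pass/fail pivot for the case: with --nvme-error-stat enabled, the bdev layer keeps per-status-code NVMe error counters, and get_transient_errcount reads them back over the bperf RPC socket. The 137 substituted into (( 137 > 0 )) is the command_transient_transport_error counter, which also explains why the JSON summary above reports io_failed: 0 despite the flood of digest errors: with --bdev-retry-count -1 each corrupted read is retried until it succeeds, and only the counters remember. A minimal sketch of the helper, assuming nothing beyond the socket and jq filter traced above:

    get_transient_errcount() {
        # Pull the per-bdev NVMe error counters and keep just the transient
        # transport error count (the 00/22 status printed in the log above).
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }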
00:21:32.846 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94520
00:21:32.846 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94520 ']'
00:21:32.846 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94520
00:21:32.846 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:21:32.846 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:32.846 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94520
00:21:32.847 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:32.847 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:21:32.847 killing process with pid 94520
22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94520'
00:21:32.847 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94520
00:21:32.847 Received shutdown signal, test time was about 2.000000 seconds
00:21:32.847
00:21:32.847 Latency(us)
00:21:32.847 [2024-12-07T22:52:47.613Z] Device Information          : runtime(s)       IOPS      MiB/s    Fail/s    TO/s    Average       min       max
00:21:32.847 [2024-12-07T22:52:47.613Z] ===================================================================================================================
00:21:32.847 [2024-12-07T22:52:47.613Z] Total                       :                0.00       0.00      0.00    0.00       0.00      0.00      0.00
00:21:32.847 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94520
00:21:32.847 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:21:33.106 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:21:33.106 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:21:33.106 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:21:33.106 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:21:33.106 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:21:33.106 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94575
00:21:33.106 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94575 /var/tmp/bperf.sock
00:21:33.106 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94575 ']'
00:21:33.106 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:33.106 22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:33.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
22:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:33.106 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:33.106 Zero copy mechanism will not be used.
00:21:33.106 [2024-12-07 22:52:47.773209] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
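Each error case gets a fresh bdevperf instance; this one switches the workload to 128 KiB random reads at queue depth 16. The launch-and-wait pattern being traced is, roughly (paths as logged; waitforlisten is the autotest helper that polls until the RPC socket accepts connections, so this is a sketch of the harness code, not a verbatim copy):

    # -z keeps bdevperf idle until perform_tests is sent over the RPC socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock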
00:21:33.106 [2024-12-07 22:52:47.773309] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94575 ] 00:21:33.365 [2024-12-07 22:52:47.900709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.365 [2024-12-07 22:52:47.932787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.365 [2024-12-07 22:52:47.959374] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:33.365 22:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:33.365 22:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:33.365 22:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:33.365 22:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:33.622 22:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:33.622 22:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.622 22:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:33.622 22:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.622 22:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:33.622 22:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:33.881 nvme0n1 00:21:33.881 22:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:33.881 22:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.881 22:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:33.881 22:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.881 22:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:33.881 22:52:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:34.141 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:34.141 Zero copy mechanism will not be used. 00:21:34.141 Running I/O for 2 seconds... 
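The setup traced above condenses to a short, self-contained sequence. A minimal sketch follows — not the original host/digest.sh helpers: SPDK_REPO stands in for /home/vagrant/spdk_repo/spdk, and it assumes the nvmf target started earlier in this log is already exporting nqn.2016-06.io.spdk:cnode1 on 10.0.0.3:4420 and listening on the default RPC socket (where rpc_cmd sends the injection in the trace).

  #!/usr/bin/env bash
  # Sketch of the nvmf_digest_error flow as traced: corrupt the target's
  # crc32c results so the host's TCP receive path sees data digest errors,
  # then read back the transient transport error count. Paths and flags are
  # copied from the trace; the script layout itself is an assumption.
  SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}
  BPERF="$SPDK_REPO/scripts/rpc.py -s /var/tmp/bperf.sock"   # bdevperf RPC
  TGT="$SPDK_REPO/scripts/rpc.py"                            # target RPC (default socket)

  # Host side: bdevperf waits (-z) until perform_tests arrives over RPC.
  "$SPDK_REPO/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z &
  until [ -S /var/tmp/bperf.sock ]; do sleep 0.1; done   # the harness uses waitforlisten

  # Unlimited retries plus error stats, so corrupted I/O is retried and
  # counted instead of failing the job; --ddgst enables the TCP data digest.
  $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Target side: corrupt every 32nd crc32c computation (the trace first
  # clears any previous injection with '-t disable').
  $TGT accel_error_inject_error -o crc32c -t corrupt -i 32

  "$SPDK_REPO/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

  # Each bad digest surfaces as COMMAND TRANSIENT TRANSPORT ERROR (00/22);
  # this count is what get_transient_errcount asserts is greater than zero.
  $BPERF bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0]
    | .driver_specific | .nvme_error | .status_code
    | .command_transient_transport_error'

With roughly one in 32 digests corrupted, the 2-second randread run completes with a stream of (00/22) transient transport error completions — the repeated nvme_qpair output that follows.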
00:21:34.141 [2024-12-07 22:52:48.674898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.141 [2024-12-07 22:52:48.675003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.141 [2024-12-07 22:52:48.675033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.141 [2024-12-07 22:52:48.678845] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.141 [2024-12-07 22:52:48.678911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.141 [2024-12-07 22:52:48.678941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.141 [2024-12-07 22:52:48.682803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.141 [2024-12-07 22:52:48.682856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.141 [2024-12-07 22:52:48.682896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.141 [2024-12-07 22:52:48.686618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.141 [2024-12-07 22:52:48.686690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.141 [2024-12-07 22:52:48.686719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.141 [2024-12-07 22:52:48.690573] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.141 [2024-12-07 22:52:48.690623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.141 [2024-12-07 22:52:48.690651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.141 [2024-12-07 22:52:48.694507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.141 [2024-12-07 22:52:48.694558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.141 [2024-12-07 22:52:48.694585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.141 [2024-12-07 22:52:48.698451] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.141 [2024-12-07 22:52:48.698501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.141 [2024-12-07 22:52:48.698528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.141 [2024-12-07 22:52:48.702340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.141 [2024-12-07 22:52:48.702388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.141 [2024-12-07 22:52:48.702416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.141 [2024-12-07 22:52:48.706176] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.141 [2024-12-07 22:52:48.706224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.141 [2024-12-07 22:52:48.706251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.709981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.710029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.710056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.713821] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.713896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.713909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.717688] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.717736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.717764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.721590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.721640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.721667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.725492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.725541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.725568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.729446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.729495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.729521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.733311] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.733359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.733386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.737144] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.737194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.737221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.741026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.741077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.741104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.744997] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.745046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.745073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.748809] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.748858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.748909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.752744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.752793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:34.142 [2024-12-07 22:52:48.752820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.756640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.756690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.756716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.760603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.760652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.760679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.764511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.764560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.764587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.768474] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.768524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.768551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.772502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.772550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.772577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.776419] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.776468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.776495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.780468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.780517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.780544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.784485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.784533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.784560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.788454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.788502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.788528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.792304] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.792352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.792378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.796155] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.796203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.796230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.799981] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.800029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.800055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.803871] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.803927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.803955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.807791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.807839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.807867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.142 [2024-12-07 22:52:48.811666] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.142 [2024-12-07 22:52:48.811714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.142 [2024-12-07 22:52:48.811741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.815642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.815690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.815717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.819612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.819660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.819686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.823513] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.823562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.823589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.827458] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.827506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.827534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.831408] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.831457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.831484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.835341] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 
00:21:34.143 [2024-12-07 22:52:48.835389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.835417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.839166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.839213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.839240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.843070] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.843117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.843144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.846852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.846912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.846940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.850694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.850759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.850787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.854578] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.854627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.854654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.858432] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.858481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.858508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.862302] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.862351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.862378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.866239] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.866287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.866314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.870285] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.870336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.870363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.874217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.874265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.874292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.878051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.878100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.878127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.881911] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.881959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.881986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.885794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.885842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.885868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.889699] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.889747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.889774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.893609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.893657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.893684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.897501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.897550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.897577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.143 [2024-12-07 22:52:48.901611] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.143 [2024-12-07 22:52:48.901676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.143 [2024-12-07 22:52:48.901687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.404 [2024-12-07 22:52:48.906082] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.404 [2024-12-07 22:52:48.906132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.404 [2024-12-07 22:52:48.906159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.404 [2024-12-07 22:52:48.910044] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.404 [2024-12-07 22:52:48.910110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.404 [2024-12-07 22:52:48.910121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.404 [2024-12-07 22:52:48.914072] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.404 [2024-12-07 22:52:48.914123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.404 [2024-12-07 22:52:48.914149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:21:34.404 [2024-12-07 22:52:48.918208] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.404 [2024-12-07 22:52:48.918259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.404 [2024-12-07 22:52:48.918287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.404 [2024-12-07 22:52:48.922229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.404 [2024-12-07 22:52:48.922277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.404 [2024-12-07 22:52:48.922305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.404 [2024-12-07 22:52:48.926119] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.404 [2024-12-07 22:52:48.926168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.404 [2024-12-07 22:52:48.926197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.404 [2024-12-07 22:52:48.929967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.404 [2024-12-07 22:52:48.930016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.404 [2024-12-07 22:52:48.930043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.404 [2024-12-07 22:52:48.933797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:48.933845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:48.933872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:48.937684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:48.937733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:48.937760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:48.941638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:48.941687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:48.941714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:48.945620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:48.945669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:48.945697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:48.949491] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:48.949539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:48.949567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:48.953397] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:48.953445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:48.953472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:48.957220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:48.957268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:48.957295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:48.961105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:48.961153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:48.961180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:48.965054] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:48.965103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:48.965130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:48.969060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:48.969108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:48.969135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:48.972950] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:48.972999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:48.973025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:48.976787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:48.976836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:48.976863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:48.980727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:48.980775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:48.980802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:48.984717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:48.984766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:48.984794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:48.988629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:48.988678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:48.988705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:48.992602] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:48.992651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:48.992678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:48.996504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:48.996553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:34.405 [2024-12-07 22:52:48.996579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:49.000404] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:49.000452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:49.000479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:49.004289] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:49.004339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:49.004365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:49.008245] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:49.008294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:49.008320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:49.012088] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:49.012135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:49.012162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:49.016089] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:49.016137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:49.016164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:49.019967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:49.020016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:49.020043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:49.023869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:49.023926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:49.023954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:49.027795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:49.027844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:49.027872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:49.031679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:49.031727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:49.031754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.405 [2024-12-07 22:52:49.035609] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.405 [2024-12-07 22:52:49.035659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.405 [2024-12-07 22:52:49.035685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.406 [2024-12-07 22:52:49.039642] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.406 [2024-12-07 22:52:49.039691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.406 [2024-12-07 22:52:49.039720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.406 [2024-12-07 22:52:49.043556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.406 [2024-12-07 22:52:49.043605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.406 [2024-12-07 22:52:49.043632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.406 [2024-12-07 22:52:49.047549] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.406 [2024-12-07 22:52:49.047598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.406 [2024-12-07 22:52:49.047625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.406 [2024-12-07 22:52:49.051473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.406 [2024-12-07 22:52:49.051521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.406 [2024-12-07 22:52:49.051548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.406 [2024-12-07 22:52:49.055362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.406 [2024-12-07 22:52:49.055411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.406 [2024-12-07 22:52:49.055438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.406 [2024-12-07 22:52:49.059201] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.406 [2024-12-07 22:52:49.059248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.406 [2024-12-07 22:52:49.059275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.406 [2024-12-07 22:52:49.063275] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.406 [2024-12-07 22:52:49.063322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.406 [2024-12-07 22:52:49.063349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.406 [2024-12-07 22:52:49.067366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.406 [2024-12-07 22:52:49.067413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.406 [2024-12-07 22:52:49.067440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.406 [2024-12-07 22:52:49.071308] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.406 [2024-12-07 22:52:49.071356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.406 [2024-12-07 22:52:49.071383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.406 [2024-12-07 22:52:49.075166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.406 [2024-12-07 22:52:49.075214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.406 [2024-12-07 22:52:49.075242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.406 [2024-12-07 22:52:49.078968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 
00:21:34.406 [2024-12-07 22:52:49.079032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.406 [2024-12-07 22:52:49.079059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:34.406 [2024-12-07 22:52:49.082814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50)
00:21:34.406 [2024-12-07 22:52:49.082866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.406 [2024-12-07 22:52:49.082907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:34.406 [2024-12-07 22:52:49.086598] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50)
00:21:34.406 [2024-12-07 22:52:49.086646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.406 [2024-12-07 22:52:49.086696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... same three-record pattern (nvme_tcp.c:1470 data digest error on tqpair=(0xa68f50); the failing READ sqid:1 cid:15 nsid:1 len:32 at a varying lba; its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061) repeated for dozens of commands from 22:52:49.090571 through 22:52:49.603615, elapsed markers 00:21:34.406 through 00:21:34.932 ...]
00:21:34.932 [2024-12-07 22:52:49.607487] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error
on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.607536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.607563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.611263] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.611311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.611339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.615094] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.615144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.615172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.618866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.618928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.618958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.622637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.622709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.622753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.626496] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.626545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.626573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.630355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.630403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.630430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.634221] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.634271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.634298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.638003] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.638052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.638079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.641790] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.641840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.641867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.645704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.645753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.645780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.649599] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.649648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.649676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.653534] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.653584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.653612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.657427] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.657477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.657505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:34.932 [2024-12-07 22:52:49.661309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.661359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.661386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.665072] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.665121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.665148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.932 7766.00 IOPS, 970.75 MiB/s [2024-12-07T22:52:49.698Z] [2024-12-07 22:52:49.670087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.670136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.670163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.673946] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.673995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.674022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.677769] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.677819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.677846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.681653] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.681703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.681731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.932 [2024-12-07 22:52:49.685590] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:34.932 [2024-12-07 22:52:49.685639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.932 [2024-12-07 22:52:49.685668] nvme_qpair.c: 
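The repeated *ERROR* lines above come from the NVMe/TCP receive path failing its data-digest check: an NVMe/TCP PDU may carry a CRC32C digest over its payload, and nvme_tcp_accel_seq_recv_compute_crc32_done fails the request when the digest it recomputes disagrees with the one received on the wire, which this test appears to provoke deliberately. A minimal Python sketch of such a check, assuming a plain bitwise CRC32C; the names pdu_payload and wire_digest are illustrative, not SPDK's (SPDK offloads this CRC to its accel framework):

# Bitwise CRC32C (Castagnoli), reflected polynomial 0x82F63B78,
# init and final XOR 0xFFFFFFFF: the digest NVMe/TCP uses.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def check_data_digest(pdu_payload: bytes, wire_digest: int) -> bool:
    # A False result here is what the log reports as "data digest error".
    return crc32c(pdu_payload) == wire_digest

payload = b"\x00" * 4096
good = crc32c(payload)
assert check_data_digest(payload, good)
assert not check_data_digest(payload, good ^ 1)  # simulated corrupted digest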
00:21:35.193 [2024-12-07 22:52:49.694031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50)
00:21:35.193 [2024-12-07 22:52:49.694113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:35.193 [2024-12-07 22:52:49.694125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the digest-error/READ/transient-transport-error sequence repeats for dozens more lba values through 2024-12-07 22:52:49.950644 ...]
00:21:35.196 [2024-12-07 22:52:49.954901] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50)
00:21:35.196 [2024-12-07 22:52:49.954952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:35.196 [2024-12-07 22:52:49.954965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
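Each failed READ completes with "COMMAND TRANSIENT TRANSPORT ERROR (00/22)", i.e. status code type 0x0 (generic) with status code 0x22 (Transient Transport Error), a retryable status, which is presumably why the workload keeps running and reporting IOPS despite the injected digest failures. A sketch of how the printed sct/sc and p/m/dnr fields unpack from the 16-bit NVMe completion status word (the raw value below is made up for illustration):

# NVMe CQE dword 3, upper half: bit 0 = phase tag (p), bits 8:1 = status
# code (sc), bits 11:9 = status code type (sct), bits 13:12 = CRD,
# bit 14 = more (m), bit 15 = do not retry (dnr).
def decode_status(status: int) -> dict:
    return {
        "p":   status & 0x1,
        "sc":  (status >> 1) & 0xFF,
        "sct": (status >> 9) & 0x7,
        "crd": (status >> 12) & 0x3,
        "m":   (status >> 14) & 0x1,
        "dnr": (status >> 15) & 0x1,
    }

# Hypothetical raw value encoding sct=0x0, sc=0x22, p/m/dnr all zero:
s = decode_status(0x22 << 1)
assert (s["sct"], s["sc"]) == (0x00, 0x22)  # printed as "(00/22)"
assert (s["p"], s["m"], s["dnr"]) == (0, 0, 0)  # printed as "p:0 m:0 dnr:0"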
00:21:35.456 [2024-12-07 22:52:49.959136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50)
00:21:35.456 [2024-12-07 22:52:49.959187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:35.456 [2024-12-07 22:52:49.959214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same sequence continues, intervals stretching from ~4 ms to ~6 ms, through 2024-12-07 22:52:50.070469 ...]
00:21:35.457 [2024-12-07 22:52:50.074267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done:
*ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.457 [2024-12-07 22:52:50.074316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.457 [2024-12-07 22:52:50.074343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.457 [2024-12-07 22:52:50.078216] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.457 [2024-12-07 22:52:50.078265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.457 [2024-12-07 22:52:50.078292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.457 [2024-12-07 22:52:50.082066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.457 [2024-12-07 22:52:50.082115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.457 [2024-12-07 22:52:50.082142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.457 [2024-12-07 22:52:50.086025] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.457 [2024-12-07 22:52:50.086073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.457 [2024-12-07 22:52:50.086100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.457 [2024-12-07 22:52:50.089913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.457 [2024-12-07 22:52:50.089961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.457 [2024-12-07 22:52:50.089988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.457 [2024-12-07 22:52:50.093767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.457 [2024-12-07 22:52:50.093816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.093843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.097689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.097739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.097766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.101664] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.101713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.101740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.105591] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.105640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.105668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.109492] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.109541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.109568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.113321] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.113370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.113397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.117221] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.117270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.117297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.121122] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.121171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.121198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.125001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.125049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.125077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:21:35.458 [2024-12-07 22:52:50.128913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.128960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.128987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.132867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.132945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.132981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.136914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.136962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.136989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.140780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.140829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.140856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.144782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.144831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.144858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.148738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.148788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.148815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.152640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.152689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.152716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.156548] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.156598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.156625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.160501] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.160550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.160577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.164413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.164461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.164488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.168306] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.168355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.168382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.172162] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.172223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.172265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.176073] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.176134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.176161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.179956] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.180004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.180032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.183774] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.183822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.183849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.187767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.187816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.187843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.191709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.191758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.191786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.195672] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.458 [2024-12-07 22:52:50.195721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.458 [2024-12-07 22:52:50.195748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.458 [2024-12-07 22:52:50.199669] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.459 [2024-12-07 22:52:50.199859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.459 [2024-12-07 22:52:50.199937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.459 [2024-12-07 22:52:50.203618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.459 [2024-12-07 22:52:50.203647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.459 [2024-12-07 22:52:50.203674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.459 [2024-12-07 22:52:50.207639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.459 [2024-12-07 22:52:50.207854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:35.459 [2024-12-07 22:52:50.208020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.459 [2024-12-07 22:52:50.211775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.459 [2024-12-07 22:52:50.212019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.459 [2024-12-07 22:52:50.212138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.459 [2024-12-07 22:52:50.216303] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.459 [2024-12-07 22:52:50.216521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.459 [2024-12-07 22:52:50.216656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.719 [2024-12-07 22:52:50.221171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.719 [2024-12-07 22:52:50.221383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.719 [2024-12-07 22:52:50.221504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.719 [2024-12-07 22:52:50.225556] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.719 [2024-12-07 22:52:50.225781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.719 [2024-12-07 22:52:50.226067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.719 [2024-12-07 22:52:50.230087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.719 [2024-12-07 22:52:50.230296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.719 [2024-12-07 22:52:50.230424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.719 [2024-12-07 22:52:50.234344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.719 [2024-12-07 22:52:50.234545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.719 [2024-12-07 22:52:50.234661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.719 [2024-12-07 22:52:50.238838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.239098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.239348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.243506] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.243705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.243825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.247860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.248105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.248210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.252196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.252248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.252275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.256011] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.256044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.256072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.259781] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.259988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.260021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.263791] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.263821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.263849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.267647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.267856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.268005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.272494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.272680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.272816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.277058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.277274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.277394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.281993] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.282205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.282358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.286970] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.287242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.287412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.292180] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.292446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.292573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.296869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.297112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.297241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.301715] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 
00:21:35.720 [2024-12-07 22:52:50.301954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.302097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.306459] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.306658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.306836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.311331] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.311510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.311527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.315433] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.315467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.315495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.319356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.319388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.319415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.323196] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.323230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.323257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.327131] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.327164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.327192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.331313] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.331347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.331374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.335150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.335181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.335208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.338989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.339053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.339095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.343051] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.343101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.343130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.346913] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.346949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.346978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.350658] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.720 [2024-12-07 22:52:50.350927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.720 [2024-12-07 22:52:50.350947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.720 [2024-12-07 22:52:50.354803] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.354999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.355030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.358869] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.359113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.359130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.363008] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.363043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.363085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.366830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.367028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.367061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.371168] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.371203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.371231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.374945] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.374979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.375007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.378720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.378951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.378969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.382935] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.382972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.383016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:21:35.721 [2024-12-07 22:52:50.386616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.386831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.386865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.390697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.390908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.390942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.394826] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.395039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.395070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.398610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.398639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.398673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.402472] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.402689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.402862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.406767] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.407029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.407227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.411357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.411548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.411841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.415863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.416098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.416272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.420365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.420548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.420681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.424605] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.424804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.424930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.428875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.429094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.429251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.433292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.433492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.433619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.437603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.437803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.437949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.441965] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.442133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.442165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.446152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.446329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.446461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.450340] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.450550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.450697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.454863] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.454926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.454941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.458631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.458686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.458699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.462594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.721 [2024-12-07 22:52:50.462627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.721 [2024-12-07 22:52:50.462655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.721 [2024-12-07 22:52:50.466468] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.722 [2024-12-07 22:52:50.466501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.722 [2024-12-07 22:52:50.466528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.722 [2024-12-07 22:52:50.470309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.722 [2024-12-07 22:52:50.470344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:35.722 [2024-12-07 22:52:50.470372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.722 [2024-12-07 22:52:50.474187] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.722 [2024-12-07 22:52:50.474220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.722 [2024-12-07 22:52:50.474247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.722 [2024-12-07 22:52:50.478059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.722 [2024-12-07 22:52:50.478094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.722 [2024-12-07 22:52:50.478106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.982 [2024-12-07 22:52:50.482220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.982 [2024-12-07 22:52:50.482256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.982 [2024-12-07 22:52:50.482285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.982 [2024-12-07 22:52:50.486291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.982 [2024-12-07 22:52:50.486323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.982 [2024-12-07 22:52:50.486351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.982 [2024-12-07 22:52:50.490511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.982 [2024-12-07 22:52:50.490545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.982 [2024-12-07 22:52:50.490572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.982 [2024-12-07 22:52:50.494454] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.982 [2024-12-07 22:52:50.494487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.982 [2024-12-07 22:52:50.494516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.982 [2024-12-07 22:52:50.498344] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.982 [2024-12-07 22:52:50.498378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.982 [2024-12-07 22:52:50.498405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.982 [2024-12-07 22:52:50.502169] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.982 [2024-12-07 22:52:50.502202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.982 [2024-12-07 22:52:50.502230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.982 [2024-12-07 22:52:50.505986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.982 [2024-12-07 22:52:50.506018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.982 [2024-12-07 22:52:50.506045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.982 [2024-12-07 22:52:50.509764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.982 [2024-12-07 22:52:50.509988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.982 [2024-12-07 22:52:50.510005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.982 [2024-12-07 22:52:50.513914] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.982 [2024-12-07 22:52:50.513948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.982 [2024-12-07 22:52:50.513976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.982 [2024-12-07 22:52:50.517674] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.982 [2024-12-07 22:52:50.517848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.982 [2024-12-07 22:52:50.517880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.982 [2024-12-07 22:52:50.522034] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.982 [2024-12-07 22:52:50.522070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.982 [2024-12-07 22:52:50.522098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.982 [2024-12-07 22:52:50.526354] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.982 [2024-12-07 22:52:50.526389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.526417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.530635] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.530692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.530721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.534932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.534969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.534998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.539612] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.539648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.539677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.544136] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.544173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.544217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.548607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.548640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.548668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.553076] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.553111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.553140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.557441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 
[2024-12-07 22:52:50.557476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.557505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.561616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.561651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.561679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.565794] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.565828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.565857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.570062] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.570096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.570124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.574241] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.574275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.574303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.578177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.578210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.578238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.582038] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.582072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.582099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.586118] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.586167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.586194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.590009] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.590058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.590084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.593928] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.593977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.594004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.597834] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.597909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.597938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.601951] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.602001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.602028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.605942] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.605991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.606018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.609936] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.609985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.610012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.613960] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.614011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.614057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.618092] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.618142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.618170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.621994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.622043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.622071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.625857] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.625916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.625943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.629995] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.630032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.630045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.634098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.634148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.983 [2024-12-07 22:52:50.634175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.983 [2024-12-07 22:52:50.638042] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.983 [2024-12-07 22:52:50.638090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.984 [2024-12-07 22:52:50.638118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:21:35.984 [2024-12-07 22:52:50.642001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.984 [2024-12-07 22:52:50.642035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.984 [2024-12-07 22:52:50.642063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.984 [2024-12-07 22:52:50.646033] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.984 [2024-12-07 22:52:50.646082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.984 [2024-12-07 22:52:50.646110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.984 [2024-12-07 22:52:50.649992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.984 [2024-12-07 22:52:50.650040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.984 [2024-12-07 22:52:50.650068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.984 [2024-12-07 22:52:50.653968] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.984 [2024-12-07 22:52:50.654017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.984 [2024-12-07 22:52:50.654045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:35.984 [2024-12-07 22:52:50.657938] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.984 [2024-12-07 22:52:50.657989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.984 [2024-12-07 22:52:50.658001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:35.984 [2024-12-07 22:52:50.662113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.984 [2024-12-07 22:52:50.662147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.984 [2024-12-07 22:52:50.662175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:35.984 [2024-12-07 22:52:50.665994] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa68f50) 00:21:35.984 [2024-12-07 22:52:50.666043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.984 [2024-12-07 22:52:50.666070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:35.984 7688.00 IOPS, 961.00 MiB/s
00:21:35.984 Latency(us)
00:21:35.984 [2024-12-07T22:52:50.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:35.984 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:21:35.984 nvme0n1 : 2.00 7687.94 960.99 0.00 0.00 2078.35 1630.95 6285.50
00:21:35.984 [2024-12-07T22:52:50.750Z] ===================================================================================================================
00:21:35.984 [2024-12-07T22:52:50.750Z] Total : 7687.94 960.99 0.00 0.00 2078.35 1630.95 6285.50
00:21:35.984 {
00:21:35.984   "results": [
00:21:35.984     {
00:21:35.984       "job": "nvme0n1",
00:21:35.984       "core_mask": "0x2",
00:21:35.984       "workload": "randread",
00:21:35.984       "status": "finished",
00:21:35.984       "queue_depth": 16,
00:21:35.984       "io_size": 131072,
00:21:35.984       "runtime": 2.002098,
00:21:35.984       "iops": 7687.935355811754,
00:21:35.984       "mibps": 960.9919194764692,
00:21:35.984       "io_failed": 0,
00:21:35.984       "io_timeout": 0,
00:21:35.984       "avg_latency_us": 2078.346475146475,
00:21:35.984       "min_latency_us": 1630.9527272727273,
00:21:35.984       "max_latency_us": 6285.498181818181
00:21:35.984     }
00:21:35.984   ],
00:21:35.984   "core_count": 1
00:21:35.984 }
00:21:35.984 22:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:35.984 22:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:35.984 22:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:35.984 | .driver_specific
00:21:35.984 | .nvme_error
00:21:35.984 | .status_code
00:21:35.984 | .command_transient_transport_error'
00:21:35.984 22:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:36.243 22:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 496 > 0 ))
00:21:36.243 22:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94575
00:21:36.243 22:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94575 ']'
00:21:36.243 22:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94575
00:21:36.243 22:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:21:36.243 22:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:36.243 22:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94575
00:21:36.243 22:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:36.243 22:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
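The pass/fail gate traced above is nothing more than that jq pipeline over bdev_get_iostat: the nvme_error counters are present in the iostat output, and the run passes because 496 completions carried COMMAND TRANSIENT TRANSPORT ERROR. The JSON result is also self-consistent: 7687.94 IOPS at 131072-byte IOs is 7687.94/8 = 960.99 MiB/s. A minimal standalone re-creation of the counter check, assuming the same rpc.py path and /var/tmp/bperf.sock socket shown in this log:

  #!/usr/bin/env bash
  # Sketch of the get_transient_errcount check traced above (paths taken from this log).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock

  # bdev_get_iostat -b limits the report to a single bdev; the flattened jq path
  # is equivalent to the multi-line filter in the trace.
  errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

  # The digest test passes only if at least one injected corruption surfaced
  # as a transient transport error (this run counted 496).
  (( errcount > 0 ))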
killing process with pid 94575
Received shutdown signal, test time was about 2.000000 seconds
00
00:21:36.243 Latency(us)
[2024-12-07T22:52:51.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-12-07T22:52:51.009Z] ===================================================================================================================
00:21:36.243 [2024-12-07T22:52:51.009Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:36.243 22:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94575'
00:21:36.243 22:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94575
00:21:36.243 22:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94575
00:21:36.502 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:21:36.502 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:21:36.502 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:21:36.502 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:21:36.502 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:21:36.502 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94622
00:21:36.502 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94622 /var/tmp/bperf.sock
00:21:36.502 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94622 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
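The trace shows the launch pattern this harness uses for every bperf run: bdevperf is started with -z so it comes up idle and waits on the RPC socket given by -r, and waitforlisten polls that socket before any bperf_rpc call is issued. A rough re-creation follows; the readiness probe via rpc_get_methods is an assumption of this sketch, not a claim about how waitforlisten is actually implemented:

  #!/usr/bin/env bash
  # Launch bdevperf idle (-z) with the same arguments as the traced run:
  # core mask 0x2, randwrite, 4096-byte IOs, 2 s runtime, queue depth 128.
  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  sock=/var/tmp/bperf.sock

  "$bdevperf" -m 2 -r "$sock" -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!

  # Readiness probe (sketch assumption): poll the RPC socket until it answers.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$sock" rpc_get_methods &> /dev/null; do
      sleep 0.1
  done

With -z, no I/O starts until perform_tests is sent over the socket, which is what gives the test a window to reconfigure error injection first.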
[2024-12-07 22:52:51.195490] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:21:36.502 [2024-12-07 22:52:51.195604] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94622 ]
00:21:36.764 [2024-12-07 22:52:51.332856] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:36.764 [2024-12-07 22:52:51.364472] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:21:36.764 [2024-12-07 22:52:51.390968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:21:36.764 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:36.764 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:21:36.764 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:36.764 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:37.022 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:37.022 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:37.022 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:37.022 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:37.022 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:37.022 22:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:37.279 nvme0n1
00:21:37.279 22:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:21:37.279 22:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:37.279 22:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:37.279 22:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:37.279 22:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:21:37.279 22:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:37.538 Running I/O for 2 seconds...
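Read together, the traced RPCs give the whole error-injection recipe for this randwrite case: NVMe error stats are enabled and --bdev-retry-count -1 keeps retrying failed I/O at the bdev layer (consistent with io_failed: 0 in the earlier result despite 496 counted transient errors); crc32c injection is disabled while the controller attaches with --ddgst, so the attach itself does not hit a digest error; injection is then switched to corrupt, where -i 256 appears to set the injection interval; finally perform_tests starts the queued workload. The WRITE completions that follow are those corrupted digests surfacing as COMMAND TRANSIENT TRANSPORT ERROR. The same sequence as a plain script (commands verbatim from the trace; only the rpc wrapper is added for brevity):

  #!/usr/bin/env bash
  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

  # Count NVMe error completions and retry them at the bdev layer (-1 retries).
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # No corruption while connecting, so the attach succeeds cleanly.
  rpc accel_error_inject_error -o crc32c -t disable
  # Attach with TCP data digest enabled; prints the new bdev name (nvme0n1 above).
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt crc32c results from here on (-i 256 presumably sets the interval).
  rpc accel_error_inject_error -o crc32c -t corrupt -i 256
  # Kick off the bdevperf job that has been waiting behind -z.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests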
00:21:37.538 [2024-12-07 22:52:52.132092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fef90 00:21:37.538 [2024-12-07 22:52:52.134562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.538 [2024-12-07 22:52:52.134616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.538 [2024-12-07 22:52:52.146164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198feb58 00:21:37.538 [2024-12-07 22:52:52.148387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.538 [2024-12-07 22:52:52.148434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:37.538 [2024-12-07 22:52:52.159465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fe2e8 00:21:37.538 [2024-12-07 22:52:52.161698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.538 [2024-12-07 22:52:52.161745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:37.538 [2024-12-07 22:52:52.173035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fda78 00:21:37.538 [2024-12-07 22:52:52.175312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.538 [2024-12-07 22:52:52.175357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:37.538 [2024-12-07 22:52:52.186426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fd208 00:21:37.538 [2024-12-07 22:52:52.188622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.538 [2024-12-07 22:52:52.188667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:37.538 [2024-12-07 22:52:52.199891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fc998 00:21:37.538 [2024-12-07 22:52:52.202014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.538 [2024-12-07 22:52:52.202045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:37.538 [2024-12-07 22:52:52.213686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fc128 00:21:37.538 [2024-12-07 22:52:52.215882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.538 [2024-12-07 22:52:52.215937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 
m:0 dnr:0 00:21:37.538 [2024-12-07 22:52:52.227095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fb8b8 00:21:37.538 [2024-12-07 22:52:52.229142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.538 [2024-12-07 22:52:52.229188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:37.538 [2024-12-07 22:52:52.240324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fb048 00:21:37.538 [2024-12-07 22:52:52.242420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.538 [2024-12-07 22:52:52.242464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:37.538 [2024-12-07 22:52:52.253644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fa7d8 00:21:37.538 [2024-12-07 22:52:52.255814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.538 [2024-12-07 22:52:52.255859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:37.538 [2024-12-07 22:52:52.267075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f9f68 00:21:37.538 [2024-12-07 22:52:52.269123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.538 [2024-12-07 22:52:52.269155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:37.538 [2024-12-07 22:52:52.280485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f96f8 00:21:37.538 [2024-12-07 22:52:52.282618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.538 [2024-12-07 22:52:52.282684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:37.538 [2024-12-07 22:52:52.293901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f8e88 00:21:37.538 [2024-12-07 22:52:52.295962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.538 [2024-12-07 22:52:52.296007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:37.797 [2024-12-07 22:52:52.308374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f8618 00:21:37.797 [2024-12-07 22:52:52.310444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:7362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.797 [2024-12-07 22:52:52.310474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 
cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:37.797 [2024-12-07 22:52:52.323619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f7da8 00:21:37.797 [2024-12-07 22:52:52.325865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.797 [2024-12-07 22:52:52.325936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:37.797 [2024-12-07 22:52:52.339681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f7538 00:21:37.797 [2024-12-07 22:52:52.341854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.797 [2024-12-07 22:52:52.341926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:37.798 [2024-12-07 22:52:52.353998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f6cc8 00:21:37.798 [2024-12-07 22:52:52.355995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.798 [2024-12-07 22:52:52.356026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:37.798 [2024-12-07 22:52:52.367307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f6458 00:21:37.798 [2024-12-07 22:52:52.369255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.798 [2024-12-07 22:52:52.369316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:37.798 [2024-12-07 22:52:52.380972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f5be8 00:21:37.798 [2024-12-07 22:52:52.382943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.798 [2024-12-07 22:52:52.382976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:37.798 [2024-12-07 22:52:52.394528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f5378 00:21:37.798 [2024-12-07 22:52:52.396507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.798 [2024-12-07 22:52:52.396551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:37.798 [2024-12-07 22:52:52.408024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f4b08 00:21:37.798 [2024-12-07 22:52:52.409883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.798 [2024-12-07 22:52:52.409927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:37.798 [2024-12-07 22:52:52.421341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f4298 00:21:37.798 [2024-12-07 22:52:52.423385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.798 [2024-12-07 22:52:52.423430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:37.798 [2024-12-07 22:52:52.434944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f3a28 00:21:37.798 [2024-12-07 22:52:52.436760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.798 [2024-12-07 22:52:52.436805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:37.798 [2024-12-07 22:52:52.448326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f31b8 00:21:37.798 [2024-12-07 22:52:52.450219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.798 [2024-12-07 22:52:52.450250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:37.798 [2024-12-07 22:52:52.461639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f2948 00:21:37.798 [2024-12-07 22:52:52.463587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:3343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.798 [2024-12-07 22:52:52.463632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:37.798 [2024-12-07 22:52:52.475274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f20d8 00:21:37.798 [2024-12-07 22:52:52.477122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.798 [2024-12-07 22:52:52.477167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:37.798 [2024-12-07 22:52:52.488527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f1868 00:21:37.798 [2024-12-07 22:52:52.490354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.798 [2024-12-07 22:52:52.490398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:37.798 [2024-12-07 22:52:52.501725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f0ff8 00:21:37.798 [2024-12-07 22:52:52.503599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.798 [2024-12-07 22:52:52.503643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:37.798 [2024-12-07 22:52:52.515134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f0788 00:21:37.798 [2024-12-07 22:52:52.516856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.798 [2024-12-07 22:52:52.516909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:37.798 [2024-12-07 22:52:52.528495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198eff18 00:21:37.798 [2024-12-07 22:52:52.530261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.798 [2024-12-07 22:52:52.530306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:37.798 [2024-12-07 22:52:52.542112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198ef6a8 00:21:37.798 [2024-12-07 22:52:52.543863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.798 [2024-12-07 22:52:52.543916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:37.798 [2024-12-07 22:52:52.555581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198eee38 00:21:37.798 [2024-12-07 22:52:52.557433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:37.798 [2024-12-07 22:52:52.557480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.570310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198ee5c8 00:21:38.058 [2024-12-07 22:52:52.572211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 22:52:52.572257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.584039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198edd58 00:21:38.058 [2024-12-07 22:52:52.585751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 22:52:52.585798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.597367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198ed4e8 00:21:38.058 [2024-12-07 22:52:52.599132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 22:52:52.599163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.610650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198ecc78 00:21:38.058 [2024-12-07 22:52:52.612395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 22:52:52.612440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.624118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198ec408 00:21:38.058 [2024-12-07 22:52:52.625749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 22:52:52.625794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.637584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198ebb98 00:21:38.058 [2024-12-07 22:52:52.639342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 22:52:52.639387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.651119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198eb328 00:21:38.058 [2024-12-07 22:52:52.652713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 22:52:52.652759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.664422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198eaab8 00:21:38.058 [2024-12-07 22:52:52.666025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 22:52:52.666055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.677955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198ea248 00:21:38.058 [2024-12-07 22:52:52.679545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 22:52:52.679590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.691306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e99d8 00:21:38.058 [2024-12-07 22:52:52.692850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 
22:52:52.692923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.704544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e9168 00:21:38.058 [2024-12-07 22:52:52.706106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 22:52:52.706153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.717830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e88f8 00:21:38.058 [2024-12-07 22:52:52.719410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 22:52:52.719455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.731335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e8088 00:21:38.058 [2024-12-07 22:52:52.732828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 22:52:52.732896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.744734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e7818 00:21:38.058 [2024-12-07 22:52:52.746294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 22:52:52.746338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.758087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e6fa8 00:21:38.058 [2024-12-07 22:52:52.759598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:46 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 22:52:52.759642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.771572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e6738 00:21:38.058 [2024-12-07 22:52:52.773137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 22:52:52.773185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.784942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e5ec8 00:21:38.058 [2024-12-07 22:52:52.786375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:38.058 [2024-12-07 22:52:52.786419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.798151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e5658 00:21:38.058 [2024-12-07 22:52:52.799643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 22:52:52.799688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:38.058 [2024-12-07 22:52:52.811611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e4de8 00:21:38.058 [2024-12-07 22:52:52.813096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.058 [2024-12-07 22:52:52.813140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:52.826076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e4578 00:21:38.319 [2024-12-07 22:52:52.827530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:52.827576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:52.839888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e3d08 00:21:38.319 [2024-12-07 22:52:52.841278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:52.841326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:52.853225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e3498 00:21:38.319 [2024-12-07 22:52:52.854586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:52.854632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:52.866418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e2c28 00:21:38.319 [2024-12-07 22:52:52.867798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:52.867843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:52.880190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e23b8 00:21:38.319 [2024-12-07 22:52:52.881504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9777 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:52.881550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:52.893655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e1b48 00:21:38.319 [2024-12-07 22:52:52.895066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:52.895110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:52.907448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e12d8 00:21:38.319 [2024-12-07 22:52:52.908822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:52.908892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:52.922288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e0a68 00:21:38.319 [2024-12-07 22:52:52.923704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:52.923751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:52.937922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e01f8 00:21:38.319 [2024-12-07 22:52:52.939463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:52.939510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:52.953321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198df988 00:21:38.319 [2024-12-07 22:52:52.954661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:52.954733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:52.967887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198df118 00:21:38.319 [2024-12-07 22:52:52.969190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:52.969252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:52.982190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198de8a8 00:21:38.319 [2024-12-07 22:52:52.983495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:20742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:52.983543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:52.996403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198de038 00:21:38.319 [2024-12-07 22:52:52.997654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:52.997700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:53.016453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198de038 00:21:38.319 [2024-12-07 22:52:53.018799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:53.018833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:53.030615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198de8a8 00:21:38.319 [2024-12-07 22:52:53.033022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:53.033068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:53.044964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198df118 00:21:38.319 [2024-12-07 22:52:53.047274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:53.047320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:53.059150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198df988 00:21:38.319 [2024-12-07 22:52:53.061342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:53.061388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:38.319 [2024-12-07 22:52:53.073290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e01f8 00:21:38.319 [2024-12-07 22:52:53.075606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.319 [2024-12-07 22:52:53.075653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:38.580 [2024-12-07 22:52:53.088659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e0a68 00:21:38.580 [2024-12-07 22:52:53.090952] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.090988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:38.580 [2024-12-07 22:52:53.103429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e12d8 00:21:38.580 [2024-12-07 22:52:53.105668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.105714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:38.580 18218.00 IOPS, 71.16 MiB/s [2024-12-07T22:52:53.346Z] [2024-12-07 22:52:53.117318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e1b48 00:21:38.580 [2024-12-07 22:52:53.119498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.119544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:38.580 [2024-12-07 22:52:53.130757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e23b8 00:21:38.580 [2024-12-07 22:52:53.132931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.132976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:38.580 [2024-12-07 22:52:53.144145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e2c28 00:21:38.580 [2024-12-07 22:52:53.146217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.146278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:38.580 [2024-12-07 22:52:53.157755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e3498 00:21:38.580 [2024-12-07 22:52:53.159877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.159932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:38.580 [2024-12-07 22:52:53.171298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e3d08 00:21:38.580 [2024-12-07 22:52:53.173380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.173425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:38.580 [2024-12-07 22:52:53.184789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with 
pdu=0x2000198e4578 00:21:38.580 [2024-12-07 22:52:53.186900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.186930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:38.580 [2024-12-07 22:52:53.198166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e4de8 00:21:38.580 [2024-12-07 22:52:53.200160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:21574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.200204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:38.580 [2024-12-07 22:52:53.211573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e5658 00:21:38.580 [2024-12-07 22:52:53.213613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.213657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:38.580 [2024-12-07 22:52:53.225370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e5ec8 00:21:38.580 [2024-12-07 22:52:53.227422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.227466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:38.580 [2024-12-07 22:52:53.238774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e6738 00:21:38.580 [2024-12-07 22:52:53.240753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.240799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:38.580 [2024-12-07 22:52:53.252112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e6fa8 00:21:38.580 [2024-12-07 22:52:53.254085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.254117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:38.580 [2024-12-07 22:52:53.265444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e7818 00:21:38.580 [2024-12-07 22:52:53.267461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.267508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:38.580 [2024-12-07 22:52:53.278912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x212b210) with pdu=0x2000198e8088 00:21:38.580 [2024-12-07 22:52:53.280772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.280817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:38.580 [2024-12-07 22:52:53.292271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e88f8 00:21:38.580 [2024-12-07 22:52:53.294173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.294217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:38.580 [2024-12-07 22:52:53.305556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e9168 00:21:38.580 [2024-12-07 22:52:53.307601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.307646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:38.580 [2024-12-07 22:52:53.319087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198e99d8 00:21:38.580 [2024-12-07 22:52:53.320937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.580 [2024-12-07 22:52:53.320966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:38.581 [2024-12-07 22:52:53.332505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198ea248 00:21:38.581 [2024-12-07 22:52:53.334526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.581 [2024-12-07 22:52:53.334555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:38.840 [2024-12-07 22:52:53.348601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198eaab8 00:21:38.840 [2024-12-07 22:52:53.350830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:23961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.350868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:38.841 [2024-12-07 22:52:53.364429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198eb328 00:21:38.841 [2024-12-07 22:52:53.366357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.366387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:38.841 [2024-12-07 22:52:53.379156] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x212b210) with pdu=0x2000198ebb98 00:21:38.841 [2024-12-07 22:52:53.381193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.381224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:38.841 [2024-12-07 22:52:53.392838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198ec408 00:21:38.841 [2024-12-07 22:52:53.394637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.394688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:38.841 [2024-12-07 22:52:53.406199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198ecc78 00:21:38.841 [2024-12-07 22:52:53.407976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.408007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:38.841 [2024-12-07 22:52:53.419484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198ed4e8 00:21:38.841 [2024-12-07 22:52:53.421207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.421237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:38.841 [2024-12-07 22:52:53.432883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198edd58 00:21:38.841 [2024-12-07 22:52:53.434630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.434660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:38.841 [2024-12-07 22:52:53.446393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198ee5c8 00:21:38.841 [2024-12-07 22:52:53.448191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.448222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:38.841 [2024-12-07 22:52:53.460499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198eee38 00:21:38.841 [2024-12-07 22:52:53.462214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.462245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:38.841 [2024-12-07 22:52:53.474531] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198ef6a8 00:21:38.841 [2024-12-07 22:52:53.476400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.476431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:38.841 [2024-12-07 22:52:53.488273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198eff18 00:21:38.841 [2024-12-07 22:52:53.489894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.489950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:38.841 [2024-12-07 22:52:53.501659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f0788 00:21:38.841 [2024-12-07 22:52:53.503399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.503568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:38.841 [2024-12-07 22:52:53.515315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f0ff8 00:21:38.841 [2024-12-07 22:52:53.517041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.517219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:38.841 [2024-12-07 22:52:53.529283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f1868 00:21:38.841 [2024-12-07 22:52:53.531100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.531278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:38.841 [2024-12-07 22:52:53.543438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f20d8 00:21:38.841 [2024-12-07 22:52:53.545171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.545349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:38.841 [2024-12-07 22:52:53.557420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f2948 00:21:38.841 [2024-12-07 22:52:53.559176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:2084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.559353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:38.841 
[2024-12-07 22:52:53.571259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f31b8 00:21:38.841 [2024-12-07 22:52:53.572906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.573099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:38.841 [2024-12-07 22:52:53.585130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f3a28 00:21:38.841 [2024-12-07 22:52:53.586810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.587010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:38.841 [2024-12-07 22:52:53.598971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f4298 00:21:38.841 [2024-12-07 22:52:53.600844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:38.841 [2024-12-07 22:52:53.601086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:39.101 [2024-12-07 22:52:53.614185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f4b08 00:21:39.101 [2024-12-07 22:52:53.615866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.101 [2024-12-07 22:52:53.616075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:39.101 [2024-12-07 22:52:53.628038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f5378 00:21:39.101 [2024-12-07 22:52:53.629519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.101 [2024-12-07 22:52:53.629550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:39.101 [2024-12-07 22:52:53.641367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f5be8 00:21:39.101 [2024-12-07 22:52:53.642851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.101 [2024-12-07 22:52:53.642911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:39.101 [2024-12-07 22:52:53.654618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f6458 00:21:39.101 [2024-12-07 22:52:53.656187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.101 [2024-12-07 22:52:53.656219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 
dnr:0 00:21:39.101 [2024-12-07 22:52:53.668070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f6cc8 00:21:39.101 [2024-12-07 22:52:53.669602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.101 [2024-12-07 22:52:53.669632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.101 [2024-12-07 22:52:53.681915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f7538 00:21:39.101 [2024-12-07 22:52:53.683357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.101 [2024-12-07 22:52:53.683386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:39.101 [2024-12-07 22:52:53.695151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f7da8 00:21:39.101 [2024-12-07 22:52:53.696730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.101 [2024-12-07 22:52:53.696760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:39.101 [2024-12-07 22:52:53.708636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f8618 00:21:39.101 [2024-12-07 22:52:53.710065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.101 [2024-12-07 22:52:53.710094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:39.102 [2024-12-07 22:52:53.721934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f8e88 00:21:39.102 [2024-12-07 22:52:53.723371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:23173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.102 [2024-12-07 22:52:53.723530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:39.102 [2024-12-07 22:52:53.735702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f96f8 00:21:39.102 [2024-12-07 22:52:53.737135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.102 [2024-12-07 22:52:53.737167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:21:39.102 [2024-12-07 22:52:53.749047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f9f68 00:21:39.102 [2024-12-07 22:52:53.750392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.102 [2024-12-07 22:52:53.750422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 
cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:39.102 [2024-12-07 22:52:53.762347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fa7d8 00:21:39.102 [2024-12-07 22:52:53.763695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.102 [2024-12-07 22:52:53.763725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:39.102 [2024-12-07 22:52:53.775816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fb048 00:21:39.102 [2024-12-07 22:52:53.777393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.102 [2024-12-07 22:52:53.777420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:39.102 [2024-12-07 22:52:53.789486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fb8b8 00:21:39.102 [2024-12-07 22:52:53.790777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.102 [2024-12-07 22:52:53.790810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:39.102 [2024-12-07 22:52:53.802817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fc128 00:21:39.102 [2024-12-07 22:52:53.804127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.102 [2024-12-07 22:52:53.804156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:39.102 [2024-12-07 22:52:53.816109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fc998 00:21:39.102 [2024-12-07 22:52:53.817369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.102 [2024-12-07 22:52:53.817399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:39.102 [2024-12-07 22:52:53.829442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fd208 00:21:39.102 [2024-12-07 22:52:53.830855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.102 [2024-12-07 22:52:53.830905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:21:39.102 [2024-12-07 22:52:53.842933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fda78 00:21:39.102 [2024-12-07 22:52:53.844174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.102 [2024-12-07 22:52:53.844204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:39.102 [2024-12-07 22:52:53.856129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fe2e8 00:21:39.102 [2024-12-07 22:52:53.857326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.102 [2024-12-07 22:52:53.857355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:39.362 [2024-12-07 22:52:53.870924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198feb58 00:21:39.362 [2024-12-07 22:52:53.872211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.362 [2024-12-07 22:52:53.872242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:39.362 [2024-12-07 22:52:53.890324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fef90 00:21:39.362 [2024-12-07 22:52:53.892786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.362 [2024-12-07 22:52:53.892817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.362 [2024-12-07 22:52:53.903873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198feb58 00:21:39.362 [2024-12-07 22:52:53.906084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.362 [2024-12-07 22:52:53.906114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:39.362 [2024-12-07 22:52:53.917170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fe2e8 00:21:39.362 [2024-12-07 22:52:53.919385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.362 [2024-12-07 22:52:53.919544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:39.362 [2024-12-07 22:52:53.930753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fda78 00:21:39.362 [2024-12-07 22:52:53.933085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.362 [2024-12-07 22:52:53.933115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:39.362 [2024-12-07 22:52:53.944203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fd208 00:21:39.362 [2024-12-07 22:52:53.946275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.362 [2024-12-07 22:52:53.946304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:39.362 [2024-12-07 22:52:53.957439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fc998 00:21:39.362 [2024-12-07 22:52:53.959610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.362 [2024-12-07 22:52:53.959639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:39.362 [2024-12-07 22:52:53.970777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fc128 00:21:39.362 [2024-12-07 22:52:53.972989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.362 [2024-12-07 22:52:53.973015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:39.362 [2024-12-07 22:52:53.984364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fb8b8 00:21:39.362 [2024-12-07 22:52:53.986441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:12627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.362 [2024-12-07 22:52:53.986471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:39.362 [2024-12-07 22:52:53.997665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fb048 00:21:39.362 [2024-12-07 22:52:53.999914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.362 [2024-12-07 22:52:53.999940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:39.362 [2024-12-07 22:52:54.011183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198fa7d8 00:21:39.362 [2024-12-07 22:52:54.013205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.362 [2024-12-07 22:52:54.013236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:39.362 [2024-12-07 22:52:54.024567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f9f68 00:21:39.362 [2024-12-07 22:52:54.026604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.362 [2024-12-07 22:52:54.026633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:39.362 [2024-12-07 22:52:54.038083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f96f8 00:21:39.362 [2024-12-07 22:52:54.040100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:39.362 [2024-12-07 
22:52:54.040130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:21:39.362 [2024-12-07 22:52:54.051264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f8e88 [2024-12-07 22:52:54.053518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-12-07 22:52:54.053548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:21:39.362 [2024-12-07 22:52:54.064747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f8618 [2024-12-07 22:52:54.066860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-12-07 22:52:54.066916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:21:39.362 [2024-12-07 22:52:54.078097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f7da8 [2024-12-07 22:52:54.080069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-12-07 22:52:54.080098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:21:39.362 [2024-12-07 22:52:54.091350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f7538 [2024-12-07 22:52:54.093248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-12-07 22:52:54.093277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:21:39.362 [2024-12-07 22:52:54.105347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b210) with pdu=0x2000198f6cc8 [2024-12-07 22:52:54.107462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-12-07 22:52:54.107627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:39.362 18407.00 IOPS, 71.90 MiB/s
00:21:39.362 Latency(us)
00:21:39.362 [2024-12-07T22:52:54.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:39.362 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:39.362 nvme0n1 : 2.01 18423.76 71.97 0.00 0.00 6941.76 4617.31 26810.18
00:21:39.362 [2024-12-07T22:52:54.128Z] ===================================================================================================================
00:21:39.362 [2024-12-07T22:52:54.128Z] Total : 18423.76 71.97 0.00 0.00 6941.76 4617.31 26810.18
00:21:39.362 {
00:21:39.362 "results": [
00:21:39.362 {
00:21:39.362 "job": "nvme0n1",
00:21:39.362 "core_mask": "0x2",
00:21:39.362 "workload": "randwrite",
00:21:39.362 "status": "finished",
00:21:39.362 "queue_depth": 128,
00:21:39.362 "io_size": 4096,
00:21:39.362 "runtime": 2.005128,
00:21:39.362 "iops": 18423.761475576623,
00:21:39.362 "mibps": 71.96781826397118,
00:21:39.362 "io_failed": 0,
00:21:39.362 "io_timeout": 0,
00:21:39.362 "avg_latency_us": 6941.755676761115,
00:21:39.362 "min_latency_us": 4617.309090909091,
00:21:39.362 "max_latency_us": 26810.18181818182
00:21:39.362 }
00:21:39.362 ],
00:21:39.362 "core_count": 1
00:21:39.362 }
00:21:39.622 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:39.622 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:39.622 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:39.622 | .driver_specific
00:21:39.622 | .nvme_error
00:21:39.622 | .status_code
00:21:39.622 | .command_transient_transport_error'
00:21:39.622 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 144 > 0 ))
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94622
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94622 ']'
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94622
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94622
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:39.882 killing process with pid 94622 Received shutdown signal, test time was about 2.000000 seconds
00:21:39.882
00:21:39.882 Latency(us)
00:21:39.882 [2024-12-07T22:52:54.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:39.882 [2024-12-07T22:52:54.648Z] ===================================================================================================================
00:21:39.882 [2024-12-07T22:52:54.648Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94622'
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94622
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94622
00:21:39.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94676
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94676 /var/tmp/bperf.sock
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94676 ']'
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:39.882 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:40.141 [2024-12-07 22:52:54.674430] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:21:40.141 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:40.141 Zero copy mechanism will not be used.
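For reference, run_bperf_err randwrite 131072 16 maps its three positional arguments onto the bdevperf flags traced above (rw -> -w, bs -> -o, qd -> -q). A minimal sketch of the launch sequence it performs (the backgrounding via '&' and '$!' is an assumption about how bperfpid ends up holding 94676; binary path and socket are the ones printed in this run):

  rw=randwrite bs=131072 qd=16
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w "$rw" -o "$bs" -t 2 -q "$qd" -z &
  bperfpid=$!
  waitforlisten "$bperfpid" /var/tmp/bperf.sock   # returns once the RPC socket accepts connections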
00:21:40.141 [2024-12-07 22:52:54.675305] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94676 ]
00:21:40.141 [2024-12-07 22:52:54.811169] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:40.141 [2024-12-07 22:52:54.843157] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:21:40.141 [2024-12-07 22:52:54.869400] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:21:40.401 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:40.401 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:21:40.401 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:40.401 22:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:40.659 22:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:40.659 22:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:40.659 22:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:40.659 22:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:40.659 22:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:40.659 22:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:40.917 nvme0n1
00:21:40.917 22:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:21:40.917 22:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:21:40.917 22:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:40.917 22:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:21:40.917 22:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:21:40.917 22:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:21:40.917 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:40.917 Zero copy mechanism will not be used.
00:21:40.917 Running I/O for 2 seconds...
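The trace above repeats the RPC choreography of the 4 KiB pass for the new 128 KiB workload: enable NVMe error statistics and unlimited retries on the bperf side, disable CRC32C corruption so the controller can attach cleanly with data digest (--ddgst) on, re-enable corruption, then kick off the timed run. Condensed into a sketch (the rpc shorthand is mine; per the harness convention, rpc_cmd without -s appears to target the nvmf application's default RPC socket, while the bperf calls pass -s /var/tmp/bperf.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc accel_error_inject_error -o crc32c -t disable        # no digest errors while attaching
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32  # corrupt again, -i 32 as traced
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests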
00:21:40.917 [2024-12-07 22:52:55.648324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:40.917 [2024-12-07 22:52:55.648670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:40.917 [2024-12-07 22:52:55.648701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:40.917 [2024-12-07 22:52:55.653125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:40.917 [2024-12-07 22:52:55.653473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:40.917 [2024-12-07 22:52:55.653510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:40.917 [2024-12-07 22:52:55.657761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:40.917 [2024-12-07 22:52:55.658110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:40.918 [2024-12-07 22:52:55.658149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:40.918 [2024-12-07 22:52:55.662336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:40.918 [2024-12-07 22:52:55.662695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:40.918 [2024-12-07 22:52:55.662748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:40.918 [2024-12-07 22:52:55.666924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:40.918 [2024-12-07 22:52:55.667285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:40.918 [2024-12-07 22:52:55.667321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:40.918 [2024-12-07 22:52:55.671470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:40.918 [2024-12-07 22:52:55.671807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:40.918 [2024-12-07 22:52:55.671841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:40.918 [2024-12-07 22:52:55.676150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:40.918 [2024-12-07 22:52:55.676475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:40.918 [2024-12-07 22:52:55.676511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:40.918 [2024-12-07 22:52:55.681293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:40.918 [2024-12-07 22:52:55.681700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:40.918 [2024-12-07 22:52:55.681740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.177 [2024-12-07 22:52:55.686537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.177 [2024-12-07 22:52:55.686916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.177 [2024-12-07 22:52:55.686959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.177 [2024-12-07 22:52:55.691124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.177 [2024-12-07 22:52:55.691457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.177 [2024-12-07 22:52:55.691492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.177 [2024-12-07 22:52:55.695808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.177 [2024-12-07 22:52:55.696159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.177 [2024-12-07 22:52:55.696197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.177 [2024-12-07 22:52:55.700479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.177 [2024-12-07 22:52:55.700803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.177 [2024-12-07 22:52:55.700837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.177 [2024-12-07 22:52:55.705106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.177 [2024-12-07 22:52:55.705443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.177 [2024-12-07 22:52:55.705475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.177 [2024-12-07 22:52:55.709721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.177 [2024-12-07 22:52:55.710075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.177 [2024-12-07 22:52:55.710107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.177 [2024-12-07 22:52:55.714386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.177 [2024-12-07 22:52:55.714755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.177 [2024-12-07 22:52:55.714786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.177 [2024-12-07 22:52:55.719287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.177 [2024-12-07 22:52:55.719613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.177 [2024-12-07 22:52:55.719646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.177 [2024-12-07 22:52:55.723962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.177 [2024-12-07 22:52:55.724288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.177 [2024-12-07 22:52:55.724337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.177 [2024-12-07 22:52:55.728643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.177 [2024-12-07 22:52:55.728982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.177 [2024-12-07 22:52:55.729006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.177 [2024-12-07 22:52:55.733424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.177 [2024-12-07 22:52:55.733750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.177 [2024-12-07 22:52:55.733783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.177 [2024-12-07 22:52:55.738190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.177 [2024-12-07 22:52:55.738532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.177 [2024-12-07 22:52:55.738566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.742640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.743013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.743072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.747301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.747638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.747669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.751927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.752264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.752295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.756605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.756946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.756990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.761243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.761594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.761627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.765794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.766157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.766194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.770332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.770694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.770725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.774958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.775321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.775356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.779598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.779934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.779973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.784409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.784737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.784771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.789040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.789365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.789399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.793687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.794026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.794060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.798296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.798621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.798654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.803068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.803391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.803423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.807762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.808123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.808160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.812517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.812855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.812896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.817190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.817534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.817569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.821936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.822245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.822275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.826650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.827079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.827110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.831543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.831887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.831911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.836430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.836773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.836810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.841213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.841554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.841592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.845852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.846208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.846243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.850523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.850890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.850936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.855369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.855698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.855734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.859951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.860290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.178 [2024-12-07 22:52:55.860327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.178 [2024-12-07 22:52:55.864565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.178 [2024-12-07 22:52:55.864900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.179 [2024-12-07 22:52:55.864940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.179 [2024-12-07 22:52:55.869332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.179 [2024-12-07 22:52:55.869679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.179 [2024-12-07 22:52:55.869711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.179 [2024-12-07 22:52:55.874188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.179 [2024-12-07 22:52:55.874547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.179 [2024-12-07 22:52:55.874584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.179 [2024-12-07 22:52:55.878907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.179 [2024-12-07 22:52:55.879303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.179 [2024-12-07 22:52:55.879342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.179 [2024-12-07 22:52:55.883751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.179 [2024-12-07 22:52:55.884096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.179 [2024-12-07 22:52:55.884130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.179 [2024-12-07 22:52:55.888525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.179 [2024-12-07 22:52:55.888851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.179 [2024-12-07 22:52:55.888895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.179 [2024-12-07 22:52:55.893202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.179 [2024-12-07 22:52:55.893531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.179 [2024-12-07 22:52:55.893564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.179 [2024-12-07 22:52:55.897865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.179 [2024-12-07 22:52:55.898208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.179 [2024-12-07 22:52:55.898240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.179 [2024-12-07 22:52:55.902463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.179 [2024-12-07 22:52:55.902820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.179 [2024-12-07 22:52:55.902844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.179 [2024-12-07 22:52:55.907217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.179 [2024-12-07 22:52:55.907542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.179 [2024-12-07 22:52:55.907575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.179 [2024-12-07 22:52:55.911870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.179 [2024-12-07 22:52:55.912214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.179 [2024-12-07 22:52:55.912247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.179 [2024-12-07 22:52:55.916493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.179 [2024-12-07 22:52:55.916827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.179 [2024-12-07 22:52:55.916860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.179 [2024-12-07 22:52:55.921103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.179 [2024-12-07 22:52:55.921424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.179 [2024-12-07 22:52:55.921460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.179 [2024-12-07 22:52:55.925644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.179 [2024-12-07 22:52:55.925993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.179 [2024-12-07 22:52:55.926025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.179 [2024-12-07 22:52:55.930273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.179 [2024-12-07 22:52:55.930624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.179 [2024-12-07 22:52:55.930655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.179 [2024-12-07 22:52:55.934956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.179 [2024-12-07 22:52:55.935326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.179 [2024-12-07 22:52:55.935364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.179 [2024-12-07 22:52:55.940058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.179 [2024-12-07 22:52:55.940420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.179 [2024-12-07 22:52:55.940487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:55.944943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:55.945340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:55.945380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:55.949763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:55.950115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:55.950147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:55.954368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:55.954714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:55.954746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:55.959101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:55.959435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:55.959467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:55.963641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:55.963974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:55.964013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:55.968379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:55.968704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:55.968736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:55.973058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:55.973408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:55.973441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:55.977801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:55.978157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:55.978195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:55.982493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:55.982848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:55.982891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:55.987205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:55.987531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:55.987567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:55.991838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:55.992189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:55.992228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:55.996499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:55.996824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:55.996855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:56.001199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:56.001522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:56.001570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:56.005860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:56.006213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:56.006257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:56.010504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:56.010860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:56.010902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:56.015161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:56.015507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:56.015539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:56.019909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:56.020256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:56.020297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:56.024639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.439 [2024-12-07 22:52:56.024996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.439 [2024-12-07 22:52:56.025027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.439 [2024-12-07 22:52:56.029377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.029705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.029740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.034091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.034438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.034482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.038818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.039196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.039232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.043450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.043776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.043810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.048171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.048487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.048523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.052775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.053109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.053140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.057405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.057730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.057764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.062183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.062529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.062561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.066867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.067238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.067275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.071544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.071870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.071907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.076285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.076634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.076666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.080961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.081287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.081319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.085637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.085975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.086005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.090367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.090718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.090750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.095073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.095401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.095434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.099688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.100028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.100068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.104411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.104736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.104767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.109121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.109447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.109481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.113730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.114071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.114103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.118389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.118738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.118773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.122939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.123275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.123306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.127647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.127971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.128008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.132383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.132708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.132746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.137102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.137428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.137461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.141704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.142037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.142064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.146388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.146728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.146762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.151019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.440 [2024-12-07 22:52:56.151379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.440 [2024-12-07 22:52:56.151411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.440 [2024-12-07 22:52:56.155640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.441 [2024-12-07 22:52:56.155965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.441 [2024-12-07 22:52:56.155999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.441 [2024-12-07 22:52:56.160289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.441 [2024-12-07 22:52:56.160613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.441 [2024-12-07 22:52:56.160649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.441 [2024-12-07 22:52:56.164952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.441 [2024-12-07 22:52:56.165285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.441 [2024-12-07 22:52:56.165324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.441 [2024-12-07 22:52:56.169614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.441 [2024-12-07 22:52:56.169953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.441 [2024-12-07 22:52:56.169991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.441 [2024-12-07 22:52:56.175113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.441 [2024-12-07 22:52:56.175513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.441 [2024-12-07 22:52:56.175552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.441 [2024-12-07 22:52:56.181045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.441 [2024-12-07 22:52:56.181350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.441 [2024-12-07 22:52:56.181384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.441 [2024-12-07 22:52:56.186587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.441 [2024-12-07 22:52:56.187014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.441 [2024-12-07 22:52:56.187081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.441 [2024-12-07 22:52:56.191958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.441 [2024-12-07 22:52:56.192405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.441 [2024-12-07 22:52:56.192443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.441 [2024-12-07 22:52:56.196720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.441 [2024-12-07 22:52:56.197076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.441 [2024-12-07 22:52:56.197108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.441 [2024-12-07 22:52:56.201820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.441 [2024-12-07 22:52:56.202224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.441 [2024-12-07 22:52:56.202264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.700 [2024-12-07 22:52:56.206804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.700 [2024-12-07 22:52:56.207181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.700 [2024-12-07 22:52:56.207233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.700 [2024-12-07 22:52:56.211644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.700 [2024-12-07 22:52:56.211981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.700 [2024-12-07 22:52:56.212019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.700 [2024-12-07 22:52:56.216372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.700 [2024-12-07 22:52:56.216702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.700 [2024-12-07 22:52:56.216734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.700 [2024-12-07 22:52:56.221045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.700 [2024-12-07 22:52:56.221373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.700 [2024-12-07 22:52:56.221404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.700 [2024-12-07 22:52:56.225519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.225852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.225897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.230681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.231022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.231066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.236720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.237040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.237070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.242821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.243144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.243173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.248681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.249030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.249059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.253559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.253880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.253929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.258077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.258406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.258441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.262614] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.262982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.263018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.267282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.267620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.267655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.271889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.272243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.272279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.276430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.276787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.276823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.281088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.281415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.281451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.285828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.286166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.286202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.290385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.290740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.290777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.294865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.295229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.295264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.299515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.299842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.299887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.304114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.304449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.304482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.308599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.308936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.308987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.313294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.313622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.313656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.317994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.318321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.318357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.322588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.322960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.322998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.327326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.327663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.327694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.331843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:41.701 [2024-12-07 22:52:56.332222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:41.701 [2024-12-07 22:52:56.332258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:41.701 [2024-12-07 22:52:56.336441]
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.701 [2024-12-07 22:52:56.336779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.701 [2024-12-07 22:52:56.336824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.701 [2024-12-07 22:52:56.341019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.701 [2024-12-07 22:52:56.341352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.701 [2024-12-07 22:52:56.341383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.701 [2024-12-07 22:52:56.345567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.701 [2024-12-07 22:52:56.345903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.701 [2024-12-07 22:52:56.345951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.701 [2024-12-07 22:52:56.350139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.701 [2024-12-07 22:52:56.350465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.701 [2024-12-07 22:52:56.350502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.701 [2024-12-07 22:52:56.354690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.701 [2024-12-07 22:52:56.355028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.355064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.359417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.359741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.359774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.364083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.364398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.364433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:41.702 [2024-12-07 22:52:56.368530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.368864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.368907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.373235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.373589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.373624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.377965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.378295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.378327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.382653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.383043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.383094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.387371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.387696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.387731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.392035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.392360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.392390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.396596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.396930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.396954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.401648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.402020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.402058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.406587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.406966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.406993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.411759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.412129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.412183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.417109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.417520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.417556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.422537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.422909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.422957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.427575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.427939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.427985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.432768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.433152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.433191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.437860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.438236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.438285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.442597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.442968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.443001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.447293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.447608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.447641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.451970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.452304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.452337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.456494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.456827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.456859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.702 [2024-12-07 22:52:56.461523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.702 [2024-12-07 22:52:56.461911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.702 [2024-12-07 22:52:56.461984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.466721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.467095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.981 [2024-12-07 22:52:56.467133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.471746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.472093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.981 [2024-12-07 22:52:56.472127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.476401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.476754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.981 [2024-12-07 22:52:56.476797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.481581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.481925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.981 [2024-12-07 22:52:56.481966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.486653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.487063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.981 [2024-12-07 22:52:56.487116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.491839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.492249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.981 [2024-12-07 22:52:56.492302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.496969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.497323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.981 [2024-12-07 22:52:56.497361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.502148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.502551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.981 
[2024-12-07 22:52:56.502588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.507296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.507643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.981 [2024-12-07 22:52:56.507677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.512162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.512522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.981 [2024-12-07 22:52:56.512559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.516910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.517235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.981 [2024-12-07 22:52:56.517280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.521629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.521985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.981 [2024-12-07 22:52:56.522019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.526545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.526897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.981 [2024-12-07 22:52:56.526942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.531352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.531684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.981 [2024-12-07 22:52:56.531715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.536176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.536507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:41.981 [2024-12-07 22:52:56.536539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.540972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.541326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.981 [2024-12-07 22:52:56.541368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.545846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.546205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.981 [2024-12-07 22:52:56.546243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.981 [2024-12-07 22:52:56.550511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.981 [2024-12-07 22:52:56.550898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.981 [2024-12-07 22:52:56.550940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.555310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.555641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.555673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.560035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.560388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.560425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.564789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.565144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.565186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.569496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.569840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.569896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.574226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.574571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.574614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.579026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.579407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.579444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.583966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.584294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.584325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.588651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.589015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.589072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.593406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.593747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.593790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.598229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.598620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.598658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.603168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.603541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.603578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.607885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.608237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.608278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.612555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.612900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.612944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.617528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.617868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.617908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.622223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.622567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.622603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.626938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.627310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.627346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.631677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.632032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.632065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.636669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 
[2024-12-07 22:52:56.637023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.637057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.982 6460.00 IOPS, 807.50 MiB/s [2024-12-07T22:52:56.748Z] [2024-12-07 22:52:56.642500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.642877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.642920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.647296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.647652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.647689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.652083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.652428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.652456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.656809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.657178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.657216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.661636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.661988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.662020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.666430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.666785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.666817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.671251] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.671586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.671618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.676007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.676347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.982 [2024-12-07 22:52:56.676380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.982 [2024-12-07 22:52:56.681061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.982 [2024-12-07 22:52:56.681412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.983 [2024-12-07 22:52:56.681448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.983 [2024-12-07 22:52:56.685954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.983 [2024-12-07 22:52:56.686308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.983 [2024-12-07 22:52:56.686345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.983 [2024-12-07 22:52:56.690481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.983 [2024-12-07 22:52:56.690833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.983 [2024-12-07 22:52:56.690865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.983 [2024-12-07 22:52:56.695044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.983 [2024-12-07 22:52:56.695404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.983 [2024-12-07 22:52:56.695440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.983 [2024-12-07 22:52:56.699728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.983 [2024-12-07 22:52:56.700069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.983 [2024-12-07 22:52:56.700099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:21:41.983 [2024-12-07 22:52:56.704407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.983 [2024-12-07 22:52:56.704732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.983 [2024-12-07 22:52:56.704760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.983 [2024-12-07 22:52:56.709082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.983 [2024-12-07 22:52:56.709408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.983 [2024-12-07 22:52:56.709441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.983 [2024-12-07 22:52:56.713711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.983 [2024-12-07 22:52:56.714050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.983 [2024-12-07 22:52:56.714080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.983 [2024-12-07 22:52:56.718488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.983 [2024-12-07 22:52:56.718850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.983 [2024-12-07 22:52:56.718891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.983 [2024-12-07 22:52:56.723791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.983 [2024-12-07 22:52:56.724190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.983 [2024-12-07 22:52:56.724230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.983 [2024-12-07 22:52:56.729245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:41.983 [2024-12-07 22:52:56.729602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.983 [2024-12-07 22:52:56.729642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.734736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.735146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.735186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.740365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.740751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.740791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.745493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.745842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.745915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.750629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.751010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.751049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.756071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.756456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.756495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.761549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.761936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.761976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.766742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.767140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.767179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.771633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.771969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.772017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.776350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.776687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.776719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.781186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.781509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.781541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.785946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.786279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.786311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.790722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.791080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.791117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.795377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.795700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.795734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.800058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.800382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.800415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.804675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.805011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.805044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.809404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.809735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.809766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.814065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.814392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.814424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.818639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.819015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.819052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.264 [2024-12-07 22:52:56.823379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.264 [2024-12-07 22:52:56.823703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.264 [2024-12-07 22:52:56.823734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.265 [2024-12-07 22:52:56.828022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.265 [2024-12-07 22:52:56.828346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.265 [2024-12-07 22:52:56.828377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.265 [2024-12-07 22:52:56.832643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.265 [2024-12-07 22:52:56.832980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.265 [2024-12-07 22:52:56.833022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.265 [2024-12-07 22:52:56.837355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.265 [2024-12-07 22:52:56.837687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.265 
[... the same three-record sequence (tcp.c:2233:data_crc32_calc_done *ERROR* data digest error, nvme_qpair.c WRITE command *NOTICE*, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for many more len:32 WRITE commands between 22:52:56.818 and 22:52:57.44 on the same tqpair (0x212b550); only the lba, the cid (15 until 22:52:57.027, then 0) and the cycling sqhd value (0001/0021/0041/0061) vary ...]
00:21:42.799 [2024-12-07 22:52:57.442450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90
00:21:42.799 [2024-12-07 22:52:57.442541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:42.799 [2024-12-07 22:52:57.442563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:42.799 [2024-12-07 22:52:57.447560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with 
pdu=0x2000198fef90 00:21:42.799 [2024-12-07 22:52:57.447651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.799 [2024-12-07 22:52:57.447671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.799 [2024-12-07 22:52:57.452631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.799 [2024-12-07 22:52:57.452724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.799 [2024-12-07 22:52:57.452744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.799 [2024-12-07 22:52:57.457538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.799 [2024-12-07 22:52:57.457632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.799 [2024-12-07 22:52:57.457652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.799 [2024-12-07 22:52:57.462397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.799 [2024-12-07 22:52:57.462491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.799 [2024-12-07 22:52:57.462510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.799 [2024-12-07 22:52:57.467210] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.799 [2024-12-07 22:52:57.467300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.799 [2024-12-07 22:52:57.467320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.799 [2024-12-07 22:52:57.471675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.799 [2024-12-07 22:52:57.471768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.799 [2024-12-07 22:52:57.471787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.799 [2024-12-07 22:52:57.476200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.799 [2024-12-07 22:52:57.476306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.799 [2024-12-07 22:52:57.476326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.799 [2024-12-07 22:52:57.480698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.799 [2024-12-07 22:52:57.480789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.799 [2024-12-07 22:52:57.480809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.799 [2024-12-07 22:52:57.485429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.799 [2024-12-07 22:52:57.485520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.799 [2024-12-07 22:52:57.485540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.799 [2024-12-07 22:52:57.489932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.799 [2024-12-07 22:52:57.490023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.799 [2024-12-07 22:52:57.490043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.799 [2024-12-07 22:52:57.494423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.799 [2024-12-07 22:52:57.494513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.799 [2024-12-07 22:52:57.494532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.799 [2024-12-07 22:52:57.498919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.799 [2024-12-07 22:52:57.499044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.799 [2024-12-07 22:52:57.499079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.799 [2024-12-07 22:52:57.503456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.799 [2024-12-07 22:52:57.503546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.799 [2024-12-07 22:52:57.503566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.799 [2024-12-07 22:52:57.507831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.799 [2024-12-07 22:52:57.507958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.799 [2024-12-07 22:52:57.507979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.799 [2024-12-07 22:52:57.512440] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.799 [2024-12-07 22:52:57.512529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.799 [2024-12-07 22:52:57.512548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.799 [2024-12-07 22:52:57.516867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.800 [2024-12-07 22:52:57.516989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.800 [2024-12-07 22:52:57.517023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.800 [2024-12-07 22:52:57.521320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.800 [2024-12-07 22:52:57.521411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.800 [2024-12-07 22:52:57.521430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.800 [2024-12-07 22:52:57.525834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.800 [2024-12-07 22:52:57.525935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.800 [2024-12-07 22:52:57.525955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.800 [2024-12-07 22:52:57.530392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.800 [2024-12-07 22:52:57.530483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.800 [2024-12-07 22:52:57.530502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.800 [2024-12-07 22:52:57.534833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.800 [2024-12-07 22:52:57.534941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.800 [2024-12-07 22:52:57.534963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.800 [2024-12-07 22:52:57.539372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.800 [2024-12-07 22:52:57.539462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.800 [2024-12-07 22:52:57.539482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.800 [2024-12-07 22:52:57.543866] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.800 [2024-12-07 22:52:57.543986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.800 [2024-12-07 22:52:57.544007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.800 [2024-12-07 22:52:57.548377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.800 [2024-12-07 22:52:57.548469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.800 [2024-12-07 22:52:57.548488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.800 [2024-12-07 22:52:57.552824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.800 [2024-12-07 22:52:57.552939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.800 [2024-12-07 22:52:57.552959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.800 [2024-12-07 22:52:57.557521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:42.800 [2024-12-07 22:52:57.557599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.800 [2024-12-07 22:52:57.557619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.059 [2024-12-07 22:52:57.562808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.562878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.562912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.059 [2024-12-07 22:52:57.567707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.567814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.567833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.059 [2024-12-07 22:52:57.572252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.572359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.572379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.059 
[2024-12-07 22:52:57.576747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.576839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.576858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.059 [2024-12-07 22:52:57.581330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.581422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.581442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.059 [2024-12-07 22:52:57.585856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.585957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.585976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.059 [2024-12-07 22:52:57.590258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.590348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.590368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.059 [2024-12-07 22:52:57.594695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.594796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.594817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.059 [2024-12-07 22:52:57.599246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.599337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.599357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.059 [2024-12-07 22:52:57.603631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.603722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.603742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:21:43.059 [2024-12-07 22:52:57.608166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.608258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.608293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.059 [2024-12-07 22:52:57.612589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.612682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.612701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.059 [2024-12-07 22:52:57.617106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.617196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.617215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.059 [2024-12-07 22:52:57.621570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.621660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.621681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.059 [2024-12-07 22:52:57.626069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.626157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.626177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.059 [2024-12-07 22:52:57.630533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.630628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.630648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.059 [2024-12-07 22:52:57.635150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.635257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.635276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.059 6552.00 IOPS, 819.00 MiB/s [2024-12-07T22:52:57.825Z] [2024-12-07 22:52:57.640434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x212b550) with pdu=0x2000198fef90 00:21:43.059 [2024-12-07 22:52:57.640506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.059 [2024-12-07 22:52:57.640526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.059 00:21:43.059 Latency(us) 00:21:43.059 [2024-12-07T22:52:57.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.059 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:43.059 nvme0n1 : 2.00 6551.29 818.91 0.00 0.00 2437.01 1645.85 7685.59 00:21:43.059 [2024-12-07T22:52:57.825Z] =================================================================================================================== 00:21:43.059 [2024-12-07T22:52:57.825Z] Total : 6551.29 818.91 0.00 0.00 2437.01 1645.85 7685.59 00:21:43.059 { 00:21:43.059 "results": [ 00:21:43.059 { 00:21:43.059 "job": "nvme0n1", 00:21:43.059 "core_mask": "0x2", 00:21:43.060 "workload": "randwrite", 00:21:43.060 "status": "finished", 00:21:43.060 "queue_depth": 16, 00:21:43.060 "io_size": 131072, 00:21:43.060 "runtime": 2.00388, 00:21:43.060 "iops": 6551.290496436913, 00:21:43.060 "mibps": 818.9113120546141, 00:21:43.060 "io_failed": 0, 00:21:43.060 "io_timeout": 0, 00:21:43.060 "avg_latency_us": 2437.0051254778127, 00:21:43.060 "min_latency_us": 1645.8472727272726, 00:21:43.060 "max_latency_us": 7685.585454545455 00:21:43.060 } 00:21:43.060 ], 00:21:43.060 "core_count": 1 00:21:43.060 } 00:21:43.060 22:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:43.060 22:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:43.060 22:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:43.060 | .driver_specific 00:21:43.060 | .nvme_error 00:21:43.060 | .status_code 00:21:43.060 | .command_transient_transport_error' 00:21:43.060 22:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:43.319 22:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 423 > 0 )) 00:21:43.319 22:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94676 00:21:43.319 22:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94676 ']' 00:21:43.319 22:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94676 00:21:43.319 22:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:43.319 22:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:43.319 22:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94676 00:21:43.319 22:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 
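The get_transient_errcount check traced just above is the pass/fail criterion for this phase: it reads the bdev's NVMe error counters over the bperf RPC socket and requires that the CRC32C data-digest failures injected above all surfaced as transient transport errors (423 in this run, matching the 00/22 completions in the NOTICE lines). A minimal standalone sketch of that check, assuming the socket path, bdev name, and rpc.py location shown in the trace:

    # Sketch of the digest-error assertion from host/digest.sh, under the
    # assumption that bperf is still serving RPCs on /var/tmp/bperf.sock
    # and the bdev under test is nvme0n1 (both taken from the trace above).
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    errcount=$("$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # Each injected data-digest failure completes as COMMAND TRANSIENT
    # TRANSPORT ERROR (00/22), so the counter must be non-zero.
    (( errcount > 0 ))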
00:21:43.319 killing process with pid 94676 00:21:43.319 22:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:43.319 22:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94676' 00:21:43.319 Received shutdown signal, test time was about 2.000000 seconds 00:21:43.319 00:21:43.319 Latency(us) 00:21:43.319 [2024-12-07T22:52:58.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.319 [2024-12-07T22:52:58.085Z] =================================================================================================================== 00:21:43.319 [2024-12-07T22:52:58.085Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:43.319 22:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94676 00:21:43.319 22:52:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94676 00:21:43.578 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 94491 00:21:43.578 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94491 ']' 00:21:43.578 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94491 00:21:43.578 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:43.578 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:43.578 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94491 00:21:43.578 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:43.578 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:43.578 killing process with pid 94491 00:21:43.578 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94491' 00:21:43.578 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94491 00:21:43.578 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94491 00:21:43.578 00:21:43.578 real 0m15.177s 00:21:43.578 user 0m29.722s 00:21:43.578 sys 0m4.308s 00:21:43.578 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:43.578 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:43.578 ************************************ 00:21:43.578 END TEST nvmf_digest_error 00:21:43.578 ************************************ 00:21:43.578 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:43.578 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:43.578 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:43.578 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:21:43.837 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:43.837 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:21:43.837 22:52:58 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:43.837 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:43.837 rmmod nvme_tcp 00:21:43.837 rmmod nvme_fabrics 00:21:43.837 rmmod nvme_keyring 00:21:43.837 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:43.837 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:21:43.837 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:21:43.837 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 94491 ']' 00:21:43.837 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 94491 00:21:43.837 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 94491 ']' 00:21:43.837 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 94491 00:21:43.837 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (94491) - No such process 00:21:43.837 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 94491 is not found' 00:21:43.837 Process with pid 94491 is not found 00:21:43.837 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:43.837 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:43.837 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:43.837 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:21:43.838 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:21:43.838 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:43.838 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:21:43.838 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:43.838 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:43.838 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:43.838 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:43.838 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:43.838 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:43.838 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:43.838 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:43.838 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:43.838 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:43.838 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:43.838 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:44.097 22:52:58 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:21:44.097 00:21:44.097 real 0m31.539s 00:21:44.097 user 1m0.069s 00:21:44.097 sys 0m8.939s 00:21:44.097 ************************************ 00:21:44.097 END TEST nvmf_digest 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:44.097 ************************************ 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.097 ************************************ 00:21:44.097 START TEST nvmf_host_multipath 00:21:44.097 ************************************ 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:44.097 * Looking for test storage... 
00:21:44.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:21:44.097 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:44.357 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:44.357 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.357 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.357 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.357 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.357 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.357 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.357 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.357 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.357 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.357 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.357 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.357 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:21:44.357 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:21:44.357 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.357 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:44.357 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:44.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.358 --rc genhtml_branch_coverage=1 00:21:44.358 --rc genhtml_function_coverage=1 00:21:44.358 --rc genhtml_legend=1 00:21:44.358 --rc geninfo_all_blocks=1 00:21:44.358 --rc geninfo_unexecuted_blocks=1 00:21:44.358 00:21:44.358 ' 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:44.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.358 --rc genhtml_branch_coverage=1 00:21:44.358 --rc genhtml_function_coverage=1 00:21:44.358 --rc genhtml_legend=1 00:21:44.358 --rc geninfo_all_blocks=1 00:21:44.358 --rc geninfo_unexecuted_blocks=1 00:21:44.358 00:21:44.358 ' 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:44.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.358 --rc genhtml_branch_coverage=1 00:21:44.358 --rc genhtml_function_coverage=1 00:21:44.358 --rc genhtml_legend=1 00:21:44.358 --rc geninfo_all_blocks=1 00:21:44.358 --rc geninfo_unexecuted_blocks=1 00:21:44.358 00:21:44.358 ' 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:44.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.358 --rc genhtml_branch_coverage=1 00:21:44.358 --rc genhtml_function_coverage=1 00:21:44.358 --rc genhtml_legend=1 00:21:44.358 --rc geninfo_all_blocks=1 00:21:44.358 --rc geninfo_unexecuted_blocks=1 00:21:44.358 00:21:44.358 ' 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:44.358 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:44.358 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:44.359 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:44.359 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:44.359 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:44.359 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:44.359 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:44.359 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:44.359 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:44.359 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:44.359 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:44.359 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:44.359 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:44.359 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:44.359 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:44.359 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:44.359 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:44.359 Cannot find device "nvmf_init_br" 00:21:44.359 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:44.359 22:52:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:44.359 Cannot find device "nvmf_init_br2" 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:44.359 Cannot find device "nvmf_tgt_br" 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:44.359 Cannot find device "nvmf_tgt_br2" 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:44.359 Cannot find device "nvmf_init_br" 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:44.359 Cannot find device "nvmf_init_br2" 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:44.359 Cannot find device "nvmf_tgt_br" 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:44.359 Cannot find device "nvmf_tgt_br2" 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:44.359 Cannot find device "nvmf_br" 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:44.359 Cannot find device "nvmf_init_if" 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:44.359 Cannot find device "nvmf_init_if2" 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:21:44.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:44.359 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:44.359 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:44.618 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:44.618 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:21:44.618 00:21:44.618 --- 10.0.0.3 ping statistics --- 00:21:44.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.618 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:44.618 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:44.618 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:21:44.618 00:21:44.618 --- 10.0.0.4 ping statistics --- 00:21:44.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.618 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:21:44.618 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:44.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:44.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:21:44.618 00:21:44.618 --- 10.0.0.1 ping statistics --- 00:21:44.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.618 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:44.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:44.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:21:44.619 00:21:44.619 --- 10.0.0.2 ping statistics --- 00:21:44.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:44.619 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # return 0 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # nvmfpid=94982 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # waitforlisten 94982 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 94982 ']' 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:44.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:44.619 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:44.878 [2024-12-07 22:52:59.439308] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
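A minimal sketch of the nvmf_veth_init sequence traced above, condensed by hand from the logged commands (interface names, addresses, and port numbers are exactly as they appear in the trace; the only liberties taken are the two for-loops and expanding SPDK's ipts helper into the iptables call it wraps):

  # Create the target network namespace and a veth pair for each endpoint
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # Move the target-side interfaces into the namespace
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Addresses: initiators on 10.0.0.1/.2, targets on 10.0.0.3/.4
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring everything up, then bridge the host-side peers together
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # Open NVMe/TCP port 4420 on the initiator interfaces and allow bridged forwarding
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow in the trace (10.0.0.3/.4 from the host, 10.0.0.1/.2 from inside the namespace) are the sanity check that this topology is wired correctly before nvmf_tgt is started inside the namespace.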
00:21:44.878 [2024-12-07 22:52:59.439409] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.878 [2024-12-07 22:52:59.577757] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:44.878 [2024-12-07 22:52:59.622913] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.878 [2024-12-07 22:52:59.622975] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.878 [2024-12-07 22:52:59.622989] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.878 [2024-12-07 22:52:59.622999] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.878 [2024-12-07 22:52:59.623008] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:44.878 [2024-12-07 22:52:59.623173] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.878 [2024-12-07 22:52:59.623341] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.139 [2024-12-07 22:52:59.660821] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:45.139 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:45.139 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:21:45.139 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:45.139 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:45.139 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:45.139 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:45.139 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94982 00:21:45.139 22:52:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:45.399 [2024-12-07 22:53:00.042083] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.399 22:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:45.658 Malloc0 00:21:45.658 22:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:45.919 22:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:46.178 22:53:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:46.440 [2024-12-07 22:53:01.028049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:46.440 22:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:46.699 [2024-12-07 22:53:01.244104] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:46.699 22:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:46.699 22:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95025 00:21:46.699 22:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:46.699 22:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95025 /var/tmp/bdevperf.sock 00:21:46.699 22:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 95025 ']' 00:21:46.699 22:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.699 22:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:46.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.699 22:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.699 22:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:46.699 22:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:46.957 22:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:46.957 22:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:21:46.957 22:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:47.213 22:53:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:47.470 Nvme0n1 00:21:47.470 22:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:47.728 Nvme0n1 00:21:47.987 22:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:47.987 22:53:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:48.922 22:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:48.922 22:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:49.181 22:53:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:49.439 22:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:49.439 22:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95068 00:21:49.439 22:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:49.439 22:53:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94982 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:56.003 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:21:56.003 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:21:56.003 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:21:56.003 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:56.003 Attaching 4 probes... 00:21:56.003 @path[10.0.0.3, 4421]: 20389 00:21:56.003 @path[10.0.0.3, 4421]: 21044 00:21:56.003 @path[10.0.0.3, 4421]: 20968 00:21:56.003 @path[10.0.0.3, 4421]: 21260 00:21:56.003 @path[10.0.0.3, 4421]: 20960 00:21:56.003 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:21:56.003 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:21:56.003 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:21:56.003 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:21:56.003 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:21:56.003 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:21:56.003 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95068 00:21:56.003 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:21:56.003 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:21:56.003 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:56.003 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:21:56.263 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:21:56.263 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95183 00:21:56.263 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:21:56.263 22:53:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94982 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:02.834 22:53:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:02.834 22:53:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:02.834 22:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:02.834 22:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:02.834 Attaching 4 probes... 00:22:02.834 @path[10.0.0.3, 4420]: 20455 00:22:02.834 @path[10.0.0.3, 4420]: 20794 00:22:02.834 @path[10.0.0.3, 4420]: 20511 00:22:02.834 @path[10.0.0.3, 4420]: 20606 00:22:02.834 @path[10.0.0.3, 4420]: 20641 00:22:02.834 22:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:02.834 22:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:02.834 22:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:02.834 22:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:02.834 22:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:02.834 22:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:02.834 22:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95183 00:22:02.834 22:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:02.834 22:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:22:02.834 22:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:02.834 22:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:03.094 22:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:03.094 22:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94982 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:03.094 22:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95295 00:22:03.094 22:53:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:09.665 22:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:09.665 22:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:09.665 22:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:09.665 22:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:09.665 Attaching 4 probes... 00:22:09.665 @path[10.0.0.3, 4421]: 15534 00:22:09.665 @path[10.0.0.3, 4421]: 20528 00:22:09.665 @path[10.0.0.3, 4421]: 20605 00:22:09.665 @path[10.0.0.3, 4421]: 20650 00:22:09.665 @path[10.0.0.3, 4421]: 20820 00:22:09.665 22:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:09.666 22:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:09.666 22:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:09.666 22:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:09.666 22:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:09.666 22:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:09.666 22:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95295 00:22:09.666 22:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:09.666 22:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:22:09.666 22:53:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:09.666 22:53:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:09.924 22:53:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:22:09.924 22:53:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95413 00:22:09.924 22:53:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:09.924 22:53:24 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94982 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:16.488 22:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:16.488 22:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:16.488 22:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:16.488 22:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:16.488 Attaching 4 probes... 
00:22:16.488 00:22:16.488 00:22:16.488 00:22:16.488 00:22:16.488 00:22:16.488 22:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:16.488 22:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:16.488 22:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:16.488 22:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:16.488 22:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:16.488 22:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:16.488 22:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95413 00:22:16.488 22:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:16.488 22:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:16.488 22:53:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:16.488 22:53:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:16.747 22:53:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:16.747 22:53:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94982 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:16.747 22:53:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95531 00:22:16.747 22:53:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:23.313 22:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:23.313 22:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:23.313 22:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:23.313 22:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:23.313 Attaching 4 probes... 
00:22:23.313 @path[10.0.0.3, 4421]: 20088 00:22:23.313 @path[10.0.0.3, 4421]: 20531 00:22:23.313 @path[10.0.0.3, 4421]: 20547 00:22:23.313 @path[10.0.0.3, 4421]: 20222 00:22:23.313 @path[10.0.0.3, 4421]: 20305 00:22:23.313 22:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:23.313 22:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:23.313 22:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:23.313 22:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:23.313 22:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:23.313 22:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:23.313 22:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95531 00:22:23.313 22:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:23.313 22:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:23.313 22:53:37 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:24.251 22:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:24.251 22:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95651 00:22:24.251 22:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94982 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:24.251 22:53:38 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:30.859 22:53:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:30.859 22:53:44 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:30.859 22:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:30.859 22:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:30.859 Attaching 4 probes... 
00:22:30.859 @path[10.0.0.3, 4420]: 19378 00:22:30.859 @path[10.0.0.3, 4420]: 19820 00:22:30.859 @path[10.0.0.3, 4420]: 19853 00:22:30.859 @path[10.0.0.3, 4420]: 19993 00:22:30.859 @path[10.0.0.3, 4420]: 20316 00:22:30.859 22:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:30.859 22:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:30.859 22:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:30.859 22:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:30.859 22:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:30.859 22:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:30.859 22:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95651 00:22:30.859 22:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:30.859 22:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:30.859 [2024-12-07 22:53:45.476131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:30.859 22:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:31.118 22:53:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:37.685 22:53:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:37.685 22:53:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95832 00:22:37.685 22:53:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94982 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:37.685 22:53:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:44.263 22:53:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:44.263 22:53:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:44.263 Attaching 4 probes... 
00:22:44.263 @path[10.0.0.3, 4421]: 19734 00:22:44.263 @path[10.0.0.3, 4421]: 20168 00:22:44.263 @path[10.0.0.3, 4421]: 20304 00:22:44.263 @path[10.0.0.3, 4421]: 20144 00:22:44.263 @path[10.0.0.3, 4421]: 20213 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95832 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95025 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 95025 ']' 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 95025 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95025 00:22:44.263 killing process with pid 95025 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95025' 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 95025 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 95025 00:22:44.263 { 00:22:44.263 "results": [ 00:22:44.263 { 00:22:44.263 "job": "Nvme0n1", 00:22:44.263 "core_mask": "0x4", 00:22:44.263 "workload": "verify", 00:22:44.263 "status": "terminated", 00:22:44.263 "verify_range": { 00:22:44.263 "start": 0, 00:22:44.263 "length": 16384 00:22:44.263 }, 00:22:44.263 "queue_depth": 128, 00:22:44.263 "io_size": 4096, 00:22:44.263 "runtime": 55.471425, 00:22:44.263 "iops": 8629.59622904946, 00:22:44.263 "mibps": 33.709360269724456, 00:22:44.263 "io_failed": 0, 00:22:44.263 "io_timeout": 0, 00:22:44.263 "avg_latency_us": 14808.419032675132, 00:22:44.263 "min_latency_us": 1109.6436363636365, 00:22:44.263 "max_latency_us": 7046430.72 00:22:44.263 } 00:22:44.263 ], 00:22:44.263 "core_count": 1 00:22:44.263 } 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95025 00:22:44.263 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:44.264 [2024-12-07 22:53:01.298250] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / 
DPDK 22.11.4 initialization... 00:22:44.264 [2024-12-07 22:53:01.298337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95025 ] 00:22:44.264 [2024-12-07 22:53:01.425202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.264 [2024-12-07 22:53:01.460107] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.264 [2024-12-07 22:53:01.490068] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:44.264 [2024-12-07 22:53:02.468053] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:22:44.264 Running I/O for 90 seconds... 00:22:44.264 7956.00 IOPS, 31.08 MiB/s [2024-12-07T22:53:59.030Z] 8801.50 IOPS, 34.38 MiB/s [2024-12-07T22:53:59.030Z] 9350.33 IOPS, 36.52 MiB/s [2024-12-07T22:53:59.030Z] 9648.75 IOPS, 37.69 MiB/s [2024-12-07T22:53:59.030Z] 9816.60 IOPS, 38.35 MiB/s [2024-12-07T22:53:59.030Z] 9951.17 IOPS, 38.87 MiB/s [2024-12-07T22:53:59.030Z] 10029.00 IOPS, 39.18 MiB/s [2024-12-07T22:53:59.030Z] 10054.38 IOPS, 39.27 MiB/s [2024-12-07T22:53:59.030Z] [2024-12-07 22:53:10.811633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.811687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.811752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.811772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.811793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.811808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.811829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.811843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.811862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.811876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.811908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.811925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 
22:53:10.811944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.811959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.811978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.811992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.812025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.812080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.812115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.812149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.812182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.812216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.812251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.812284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 
sqhd:002e p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.264 [2024-12-07 22:53:10.812318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.264 [2024-12-07 22:53:10.812352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.264 [2024-12-07 22:53:10.812386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.264 [2024-12-07 22:53:10.812420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.264 [2024-12-07 22:53:10.812453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.264 [2024-12-07 22:53:10.812495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.264 [2024-12-07 22:53:10.812530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.264 [2024-12-07 22:53:10.812563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.264 [2024-12-07 22:53:10.812596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.264 [2024-12-07 22:53:10.812629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.264 [2024-12-07 22:53:10.812663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.264 [2024-12-07 22:53:10.812696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.264 [2024-12-07 22:53:10.812729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.264 [2024-12-07 22:53:10.812763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.264 [2024-12-07 22:53:10.812797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.264 [2024-12-07 22:53:10.812831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.812880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.812926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.812962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.264 [2024-12-07 22:53:10.812981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.264 [2024-12-07 22:53:10.812995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:22:44.264 [2024-12-07 22:53:10.813015 .. 22:53:10.817860] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [near-identical per-command NOTICE pairs condensed] every queued WRITE (sqid:1, lba:4792..5264, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (sqid:1, lba:4376..4624, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
10033.33 IOPS, 39.19 MiB/s [2024-12-07T22:53:59.031Z] 10064.40 IOPS, 39.31 MiB/s [2024-12-07T22:53:59.031Z] 10084.73 IOPS, 39.39 MiB/s [2024-12-07T22:53:59.031Z] 10104.33 IOPS, 39.47 MiB/s [2024-12-07T22:53:59.031Z] 10126.46 IOPS, 39.56 MiB/s [2024-12-07T22:53:59.031Z] 10139.71 IOPS, 39.61 MiB/s [2024-12-07T22:53:59.031Z]
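The bandwidth markers above are consistent with the command size visible in the surrounding notices: each I/O is len:8 blocks with an SGL payload of len:0x1000 (4096 B), which implies 512 B blocks and 4 KiB per I/O, so MiB/s should equal IOPS x 4096 / 2^20. A quick sanity check follows as a minimal sketch in plain Python; nothing in it is SPDK tooling, and the per-I/O size is an inference from the len fields above.

# Sanity-check the "IOPS, MiB/s" progress markers above.
# Assumption (from the NOTICE lines): len:8 blocks with SGL len:0x1000,
# so each I/O moves 4096 bytes.
BYTES_PER_IO = 4096

def mibps(iops: float) -> float:
    """MiB/s for 4 KiB I/Os at the given IOPS."""
    return iops * BYTES_PER_IO / 2**20

for iops, reported in [(10033.33, 39.19), (10064.40, 39.31),
                       (10084.73, 39.39), (10104.33, 39.47),
                       (10126.46, 39.56), (10139.71, 39.61)]:
    assert round(mibps(iops), 2) == reported  # all six markers check out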
00:22:44.266 [2024-12-07 22:53:17.415792 .. 22:53:17.419608] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [near-identical per-command NOTICE pairs condensed] every queued WRITE (sqid:1, lba:20960..21400, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ (sqid:1, lba:20512..20888, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) again completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0; the final completion entry is truncated in the source
(03/02) qid:1 cid:124 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.419626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.267 [2024-12-07 22:53:17.419640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.419659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.267 [2024-12-07 22:53:17.419673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.419692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.267 [2024-12-07 22:53:17.419706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.419725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.267 [2024-12-07 22:53:17.419739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.419757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:20928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.267 [2024-12-07 22:53:17.419771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.419790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.267 [2024-12-07 22:53:17.419804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.419823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.267 [2024-12-07 22:53:17.419838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.420536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.267 [2024-12-07 22:53:17.420563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.420595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.267 [2024-12-07 22:53:17.420611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.420648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.267 [2024-12-07 22:53:17.420664] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.420690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.267 [2024-12-07 22:53:17.420705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.420730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.267 [2024-12-07 22:53:17.420745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.420771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.267 [2024-12-07 22:53:17.420785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.420810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.267 [2024-12-07 22:53:17.420825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.420851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.267 [2024-12-07 22:53:17.420866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.420920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.267 [2024-12-07 22:53:17.420941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.420969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.267 [2024-12-07 22:53:17.420984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.421009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.267 [2024-12-07 22:53:17.421024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.421049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.267 [2024-12-07 22:53:17.421064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:44.267 [2024-12-07 22:53:17.421090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
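Each pair above is one command record (nvme_qpair.c:243) followed by its completion record (nvme_qpair.c:474); only the cid, lba, and sqhd fields vary. The (03/02) status decodes, under the NVMe status encoding, as Status Code Type 03h (Path Related) / Status Code 02h (Asymmetric Access Inaccessible), i.e. the ANA state of the path rather than a media error. A minimal offline parsing sketch for these records follows; the regexes and variable names only mirror the printed format and are an illustrative assumption, not SPDK code:

import re

# Regexes mirror the *NOTICE* records printed by nvme_qpair.c above
# (illustrative approximation; assumes one record per line).
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
    r"lba:(?P<lba>\d+) len:(?P<len>\d+)")
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: (?P<status>.+?) "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\) "
    r"qid:(?P<qid>\d+) cid:(?P<cid>\d+) .* sqhd:(?P<sqhd>[0-9a-f]{4})")

cmd = ("nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 "
       "nsid:1 lba:21224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000")
cpl = ("spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS "
       "INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0")

m = CMD_RE.search(cmd)
# len:8 counts 512-byte logical blocks, so each I/O moves 8 * 512 = 4096
# bytes, matching the len:0x1000 SGL descriptor on the same record.
print(m["op"], int(m["lba"]), int(m["len"]) * 512)   # WRITE 21224 4096
c = CPL_RE.search(cpl)
print(c["sct"], c["sc"], c["status"])                # 03 02 ASYMMETRIC ...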
00:22:44.267 [2024-12-07 22:53:17.421104 .. 22:53:17.421275] nvme_qpair.c: [burst tail condensed: remaining WRITE sqid:1 nsid:1 lba 21496-21528 command/completion pairs, all ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, sqhd 0020 through 0024]
10001.33 IOPS, 39.07 MiB/s [2024-12-07T22:53:59.033Z] 9499.94 IOPS, 37.11 MiB/s [2024-12-07T22:53:59.033Z] 9545.82 IOPS, 37.29 MiB/s [2024-12-07T22:53:59.033Z] 9591.50 IOPS, 37.47 MiB/s [2024-12-07T22:53:59.033Z] 9632.37 IOPS, 37.63 MiB/s [2024-12-07T22:53:59.033Z] 9669.95 IOPS, 37.77 MiB/s [2024-12-07T22:53:59.033Z] 9705.48 IOPS, 37.91 MiB/s [2024-12-07T22:53:59.034Z]
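The throughput samples just above are internally consistent with the 4 KiB transfer size seen in every record (len:8 blocks x 512 B = len:0x1000 bytes), so MiB/s should equal IOPS x 4096 / 2^20, i.e. IOPS / 256. A quick check over a few of the printed samples:

# Every I/O in this log is 4 KiB, so MiB/s == IOPS * 4096 / 2**20.
samples = [(10001.33, 39.07), (9499.94, 37.11), (9705.48, 37.91)]
for iops, mibs in samples:
    assert abs(iops * 4096 / 2**20 - mibs) < 0.01, (iops, mibs)
print("all samples match a 4 KiB I/O size")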
00:22:44.268 [2024-12-07 22:53:24.503896 .. 22:53:24.518965] nvme_qpair.c: [~128 repeated *NOTICE* command/completion pairs condensed: WRITE sqid:1 nsid:1 lba 129080-129584 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) interleaved with READ sqid:1 nsid:1 lba 128568-129072 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0); every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd wrapping a full cycle from 002d through 002c]
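At this scale the bursts are easier to audit in aggregate. The sketch below is a hypothetical offline helper (not part of the test harness) that tallies printed commands per opcode and completions per SCT/SC status, and reports any cid left without a completion; on this log every completion lands in the 03/02 bucket:

import re
from collections import Counter

CMD = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:(\d+)")
CPL = re.compile(r"\*NOTICE\*: .+? \(([0-9a-f]{2}/[0-9a-f]{2})\) qid:\d+ cid:(\d+)")

def tally(lines):
    """Count commands per opcode and completions per SCT/SC status.

    Assumes one *NOTICE* record per line; split the raw stream first.
    cids are reused across bursts, so 'pending' only tracks the latest use.
    """
    ops, statuses, pending = Counter(), Counter(), {}
    for line in lines:
        if m := CMD.search(line):
            ops[m.group(1)] += 1
            pending[m.group(2)] = m.group(1)   # cid -> opcode, awaiting cpl
        elif m := CPL.search(line):
            statuses[m.group(1)] += 1
            pending.pop(m.group(2), None)      # completion retires the cid
    return ops, statuses, pending              # pending = never completed

# e.g. tally(open("autotest.log")) -> statuses == Counter({"03/02": ...})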
p:0 m:0 dnr:0 00:22:44.269 [2024-12-07 22:53:24.518648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.269 [2024-12-07 22:53:24.518708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:44.269 [2024-12-07 22:53:24.518737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.269 [2024-12-07 22:53:24.518753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:44.269 [2024-12-07 22:53:24.518781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.269 [2024-12-07 22:53:24.518796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:44.269 [2024-12-07 22:53:24.518824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.269 [2024-12-07 22:53:24.518840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:44.269 [2024-12-07 22:53:24.518867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:129576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.269 [2024-12-07 22:53:24.518883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:44.269 [2024-12-07 22:53:24.518942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.269 [2024-12-07 22:53:24.518965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:44.269 9667.23 IOPS, 37.76 MiB/s [2024-12-07T22:53:59.035Z] 9246.91 IOPS, 36.12 MiB/s [2024-12-07T22:53:59.035Z] 8861.62 IOPS, 34.62 MiB/s [2024-12-07T22:53:59.035Z] 8507.16 IOPS, 33.23 MiB/s [2024-12-07T22:53:59.035Z] 8179.96 IOPS, 31.95 MiB/s [2024-12-07T22:53:59.035Z] 7877.00 IOPS, 30.77 MiB/s [2024-12-07T22:53:59.035Z] 7595.68 IOPS, 29.67 MiB/s [2024-12-07T22:53:59.035Z] 7364.62 IOPS, 28.77 MiB/s [2024-12-07T22:53:59.035Z] 7455.40 IOPS, 29.12 MiB/s [2024-12-07T22:53:59.035Z] 7547.81 IOPS, 29.48 MiB/s [2024-12-07T22:53:59.035Z] 7632.94 IOPS, 29.82 MiB/s [2024-12-07T22:53:59.036Z] 7707.09 IOPS, 30.11 MiB/s [2024-12-07T22:53:59.036Z] 7778.29 IOPS, 30.38 MiB/s [2024-12-07T22:53:59.036Z] 7842.69 IOPS, 30.64 MiB/s [2024-12-07T22:53:59.036Z] [2024-12-07 22:53:37.902432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:122480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.902485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.902551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:122488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.902589] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.902612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:122496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.902626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.902655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:122504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.902690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.902712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:122512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.902727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.902749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:122520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.902764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.902785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:122528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.902800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.902821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:122536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.902836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.902857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:122544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.902872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.902907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:122552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.902925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.902947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.902992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:122568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 
22:53:37.903054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:122576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.903087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:122584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.903127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:122592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.903163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:122600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.903196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.903229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.903278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:122112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.903311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:122120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.903345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.903379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122136 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.903412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:122144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.903462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:122152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.903496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:122160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.903530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:122168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.903567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.903627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.903663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:122192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.903699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:122200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.903734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.903769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903790] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.903819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:122608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.903923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:122616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.903954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:122624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.903983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.903998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:122632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.904012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.904083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:122648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.904113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.904152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:122664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.904183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:122672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.904228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904257] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:122680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.904271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:122688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.904299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:122696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.904327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:122704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.904371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:122712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.904400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:122720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.904430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:122728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.904459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.904488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.904549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:122240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.904578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:122248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.904614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:122256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.904642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.904671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.904714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.904742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.904771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:122296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.904798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:122304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.904827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.904855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.904883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122328 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.904940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.904967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.904981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.270 [2024-12-07 22:53:37.905010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.905028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:122736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.905041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.905055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:122744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.905068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.905083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:122752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.905095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.905109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:122760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.270 [2024-12-07 22:53:37.905123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.270 [2024-12-07 22:53:37.905136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:122768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:122776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:122784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 
[2024-12-07 22:53:37.905228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:122800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:122808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:122816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:122824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:122832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:122840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:122848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:122856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:122352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.271 [2024-12-07 22:53:37.905474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.271 [2024-12-07 22:53:37.905501] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:122368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.271 [2024-12-07 22:53:37.905527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:122376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.271 [2024-12-07 22:53:37.905553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:122384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.271 [2024-12-07 22:53:37.905580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.271 [2024-12-07 22:53:37.905606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:122400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.271 [2024-12-07 22:53:37.905632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.271 [2024-12-07 22:53:37.905665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:122864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:122872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:122880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:122888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905777] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:122896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:122904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:122912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:122920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:122928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:122936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:122944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.905977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.905991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:122952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.906003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:122960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.906030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.906066] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.906093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.271 [2024-12-07 22:53:37.906123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:122416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.271 [2024-12-07 22:53:37.906149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:122424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.271 [2024-12-07 22:53:37.906176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.271 [2024-12-07 22:53:37.906202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:122440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.271 [2024-12-07 22:53:37.906232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:122448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.271 [2024-12-07 22:53:37.906259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:122456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.271 [2024-12-07 22:53:37.906286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:44.271 [2024-12-07 22:53:37.906313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb7860 is same with the state(6) to be set 00:22:44.271 [2024-12-07 22:53:37.906341] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:44.271 
[2024-12-07 22:53:37.906351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:44.271 [2024-12-07 22:53:37.906360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122472 len:8 PRP1 0x0 PRP2 0x0 00:22:44.271 [2024-12-07 22:53:37.906372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:44.271 [2024-12-07 22:53:37.906394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:44.271 [2024-12-07 22:53:37.906410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122992 len:8 PRP1 0x0 PRP2 0x0 00:22:44.271 [2024-12-07 22:53:37.906422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906434] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:44.271 [2024-12-07 22:53:37.906443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:44.271 [2024-12-07 22:53:37.906452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123000 len:8 PRP1 0x0 PRP2 0x0 00:22:44.271 [2024-12-07 22:53:37.906464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:44.271 [2024-12-07 22:53:37.906486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:44.271 [2024-12-07 22:53:37.906495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123008 len:8 PRP1 0x0 PRP2 0x0 00:22:44.271 [2024-12-07 22:53:37.906509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:44.271 [2024-12-07 22:53:37.906531] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:44.271 [2024-12-07 22:53:37.906540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123016 len:8 PRP1 0x0 PRP2 0x0 00:22:44.271 [2024-12-07 22:53:37.906552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:44.271 [2024-12-07 22:53:37.906573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:44.271 [2024-12-07 22:53:37.906582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123024 len:8 PRP1 0x0 PRP2 0x0 00:22:44.271 [2024-12-07 22:53:37.906596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:44.271 [2024-12-07 22:53:37.906618] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:44.271 [2024-12-07 22:53:37.906627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123032 len:8 PRP1 0x0 PRP2 0x0 00:22:44.271 [2024-12-07 22:53:37.906639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:44.271 [2024-12-07 22:53:37.906698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:44.271 [2024-12-07 22:53:37.906709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123040 len:8 PRP1 0x0 PRP2 0x0 00:22:44.271 [2024-12-07 22:53:37.906723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:44.271 [2024-12-07 22:53:37.906748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:44.271 [2024-12-07 22:53:37.906759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123048 len:8 PRP1 0x0 PRP2 0x0 00:22:44.271 [2024-12-07 22:53:37.906772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:44.271 [2024-12-07 22:53:37.906805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:44.271 [2024-12-07 22:53:37.906816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123056 len:8 PRP1 0x0 PRP2 0x0 00:22:44.271 [2024-12-07 22:53:37.906829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:44.271 [2024-12-07 22:53:37.906854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:44.271 [2024-12-07 22:53:37.906865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123064 len:8 PRP1 0x0 PRP2 0x0 00:22:44.271 [2024-12-07 22:53:37.906878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:44.271 [2024-12-07 22:53:37.906918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:44.271 [2024-12-07 22:53:37.906929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123072 len:8 PRP1 0x0 PRP2 0x0 00:22:44.271 [2024-12-07 22:53:37.906946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.271 [2024-12-07 22:53:37.906975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:44.272 [2024-12-07 22:53:37.907001] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:22:44.272 [2024-12-07 22:53:37.907026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123080 len:8 PRP1 0x0 PRP2 0x0 00:22:44.272 [2024-12-07 22:53:37.907053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.272 [2024-12-07 22:53:37.907065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:44.272 [2024-12-07 22:53:37.907074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:44.272 [2024-12-07 22:53:37.907083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123088 len:8 PRP1 0x0 PRP2 0x0 00:22:44.272 [2024-12-07 22:53:37.907098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.272 [2024-12-07 22:53:37.907110] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:44.272 [2024-12-07 22:53:37.907119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:44.272 [2024-12-07 22:53:37.907128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123096 len:8 PRP1 0x0 PRP2 0x0 00:22:44.272 [2024-12-07 22:53:37.907141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.272 [2024-12-07 22:53:37.907152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:44.272 [2024-12-07 22:53:37.907161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:44.272 [2024-12-07 22:53:37.907171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123104 len:8 PRP1 0x0 PRP2 0x0 00:22:44.272 [2024-12-07 22:53:37.907183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.272 [2024-12-07 22:53:37.907194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:44.272 [2024-12-07 22:53:37.907203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:44.272 [2024-12-07 22:53:37.907213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123112 len:8 PRP1 0x0 PRP2 0x0 00:22:44.272 [2024-12-07 22:53:37.907231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.272 [2024-12-07 22:53:37.907273] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fb7860 was disconnected and freed. reset controller. 
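For context, ASYMMETRIC ACCESS INACCESSIBLE (03/02) is an ANA (Asymmetric Namespace Access) status: the multipath test deliberately marks one listener inaccessible so that queued I/O on that path is aborted and retried on the other path. A minimal sketch of the rpc.py calls that drive such a flip; the subsystem NQN and the second portal (10.0.0.3:4421) are taken from this log, while the first portal's address and port are assumptions:

# mark the first path ANA-inaccessible (portal 10.0.0.2:4420 is assumed, not printed in this excerpt)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
# leave the second path serviceable so the retried I/O lands there
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized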
00:22:44.272 [2024-12-07 22:53:37.907372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 [same printout and ABORTED - SQ DELETION (00/08) completion repeated for admin cid:0-3 omitted]
00:22:44.272 [2024-12-07 22:53:37.907485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:44.272 [2024-12-07 22:53:37.907498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:44.272 [2024-12-07 22:53:37.907518] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f744a0 is same with the state(6) to be set
00:22:44.272 [2024-12-07 22:53:37.908493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:44.272 [2024-12-07 22:53:37.908530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f744a0 (9): Bad file descriptor
00:22:44.272 [2024-12-07 22:53:37.908896] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:22:44.272 [2024-12-07 22:53:37.908928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f744a0 with addr=10.0.0.3, port=4421
00:22:44.272 [2024-12-07 22:53:37.908945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f744a0 is same with the state(6) to be set
00:22:44.272 [2024-12-07 22:53:37.909012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f744a0 (9): Bad file descriptor
00:22:44.272 [2024-12-07 22:53:37.909047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:22:44.272 [2024-12-07 22:53:37.909063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:22:44.272 [2024-12-07 22:53:37.909078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:44.272 [2024-12-07 22:53:37.909109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
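errno = 111 is ECONNREFUSED: at this point nothing is accepting connections on 10.0.0.3:4421, so bdev_nvme leaves the controller in a failed state and keeps retrying the reset (it succeeds about ten seconds later, below). While such a reset loop is running, path state can be inspected from both sides; a minimal sketch, assuming the controller is named Nvme0 (this log only prints the bdev name Nvme0n1):

# initiator side: controller/path state as tracked by bdev_nvme (controller name assumed)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_get_controllers -n Nvme0
# target side: confirm the listener for the retried portal is back
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1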
00:22:44.272 [2024-12-07 22:53:37.909125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:44.272 7899.08 IOPS, 30.86 MiB/s [2024-12-07T22:53:59.038Z] 7945.05 IOPS, 31.04 MiB/s [2024-12-07T22:53:59.038Z] 7995.34 IOPS, 31.23 MiB/s [2024-12-07T22:53:59.038Z] 8043.87 IOPS, 31.42 MiB/s [2024-12-07T22:53:59.038Z] 8091.57 IOPS, 31.61 MiB/s [2024-12-07T22:53:59.038Z] 8137.54 IOPS, 31.79 MiB/s [2024-12-07T22:53:59.038Z] 8183.21 IOPS, 31.97 MiB/s [2024-12-07T22:53:59.038Z] 8219.88 IOPS, 32.11 MiB/s [2024-12-07T22:53:59.038Z] 8257.61 IOPS, 32.26 MiB/s [2024-12-07T22:53:59.038Z] 8293.67 IOPS, 32.40 MiB/s [2024-12-07T22:53:59.038Z] [2024-12-07 22:53:47.973410] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:44.272 8332.52 IOPS, 32.55 MiB/s [2024-12-07T22:53:59.038Z] 8372.38 IOPS, 32.70 MiB/s [2024-12-07T22:53:59.038Z] 8409.17 IOPS, 32.85 MiB/s [2024-12-07T22:53:59.038Z] 8448.24 IOPS, 33.00 MiB/s [2024-12-07T22:53:59.038Z] 8474.72 IOPS, 33.10 MiB/s [2024-12-07T22:53:59.038Z] 8504.94 IOPS, 33.22 MiB/s [2024-12-07T22:53:59.038Z] 8535.23 IOPS, 33.34 MiB/s [2024-12-07T22:53:59.038Z] 8564.49 IOPS, 33.46 MiB/s [2024-12-07T22:53:59.038Z] 8593.43 IOPS, 33.57 MiB/s [2024-12-07T22:53:59.038Z] 8622.40 IOPS, 33.68 MiB/s [2024-12-07T22:53:59.038Z] Received shutdown signal, test time was about 55.472207 seconds
00:22:44.272
00:22:44.272 Latency(us)
00:22:44.272 [2024-12-07T22:53:59.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:44.272 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:44.272 Verification LBA range: start 0x0 length 0x4000
00:22:44.272 Nvme0n1 : 55.47 8629.60 33.71 0.00 0.00 14808.42 1109.64 7046430.72
00:22:44.272 [2024-12-07T22:53:59.038Z] ===================================================================================================================
00:22:44.272 [2024-12-07T22:53:59.038Z] Total : 8629.60 33.71 0.00 0.00 14808.42 1109.64 7046430.72
00:22:44.272 [2024-12-07 22:53:58.083548] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times
00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # nvmfcleanup
00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:44.272 rmmod nvme_tcp
00:22:44.272 rmmod nvme_fabrics
00:22:44.272 rmmod nvme_keyring
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@513 -- # '[' -n 94982 ']' 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # killprocess 94982 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 94982 ']' 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 94982 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94982 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:44.272 killing process with pid 94982 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94982' 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 94982 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 94982 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-save 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 
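The killprocess trace above first checks liveness with kill -0, confirms via ps that the PID still names an SPDK reactor (reactor_0), then kills and reaps it. Condensed into a standalone sketch (the real helper in common/autotest_common.sh also resolves the child PID when the name turns out to be sudo, and retries the wait):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 0     # nothing to do if already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")     # reactor_0 for an SPDK app
        [ "$name" = sudo ] && return 1              # reduced: never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                         # reap our own child; ignore its exit code
    }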
00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.272 22:53:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.532 22:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:22:44.532 00:22:44.532 real 1m0.269s 00:22:44.532 user 2m47.456s 00:22:44.532 sys 0m17.760s 00:22:44.532 22:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:44.532 22:53:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:44.532 ************************************ 00:22:44.532 END TEST nvmf_host_multipath 00:22:44.532 ************************************ 00:22:44.532 22:53:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:44.532 22:53:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:44.532 22:53:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:44.532 22:53:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.533 ************************************ 00:22:44.533 START TEST nvmf_timeout 00:22:44.533 ************************************ 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:44.533 * Looking for test storage... 
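The END TEST / START TEST banners and the real/user/sys block above come from the suite's run_test wrapper, which times each test script under a named heading. A sketch of that pattern, inferred from the banners in this log (the real helper in autotest_common.sh additionally manages xtrace and failure reporting):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # e.g. run_test nvmf_timeout .../test/nvmf/host/timeout.sh --transport=tcp
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }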
00:22:44.533 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:44.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.533 --rc genhtml_branch_coverage=1 00:22:44.533 --rc genhtml_function_coverage=1 00:22:44.533 --rc genhtml_legend=1 00:22:44.533 --rc geninfo_all_blocks=1 00:22:44.533 --rc geninfo_unexecuted_blocks=1 00:22:44.533 00:22:44.533 ' 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:44.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.533 --rc genhtml_branch_coverage=1 00:22:44.533 --rc genhtml_function_coverage=1 00:22:44.533 --rc genhtml_legend=1 00:22:44.533 --rc geninfo_all_blocks=1 00:22:44.533 --rc geninfo_unexecuted_blocks=1 00:22:44.533 00:22:44.533 ' 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:44.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.533 --rc genhtml_branch_coverage=1 00:22:44.533 --rc genhtml_function_coverage=1 00:22:44.533 --rc genhtml_legend=1 00:22:44.533 --rc geninfo_all_blocks=1 00:22:44.533 --rc geninfo_unexecuted_blocks=1 00:22:44.533 00:22:44.533 ' 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:44.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.533 --rc genhtml_branch_coverage=1 00:22:44.533 --rc genhtml_function_coverage=1 00:22:44.533 --rc genhtml_legend=1 00:22:44.533 --rc geninfo_all_blocks=1 00:22:44.533 --rc geninfo_unexecuted_blocks=1 00:22:44.533 00:22:44.533 ' 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.533 
22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:44.533 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:44.533 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:44.534 22:53:59 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:44.534 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:44.793 Cannot find device "nvmf_init_br" 00:22:44.793 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:44.793 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:44.793 Cannot find device "nvmf_init_br2" 00:22:44.793 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:44.793 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:22:44.793 Cannot find device "nvmf_tgt_br" 00:22:44.793 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:22:44.793 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:44.793 Cannot find device "nvmf_tgt_br2" 00:22:44.793 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:22:44.793 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:44.793 Cannot find device "nvmf_init_br" 00:22:44.793 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:22:44.793 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:44.793 Cannot find device "nvmf_init_br2" 00:22:44.793 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:22:44.793 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:44.793 Cannot find device "nvmf_tgt_br" 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:44.794 Cannot find device "nvmf_tgt_br2" 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:44.794 Cannot find device "nvmf_br" 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:44.794 Cannot find device "nvmf_init_if" 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:44.794 Cannot find device "nvmf_init_if2" 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:44.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:44.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
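Backing up to the lt 1.15 2 walk traced a few lines earlier: scripts/common.sh decides which lcov option syntax to use by splitting both version strings on ., - and :, then comparing component by component. Reduced here to a self-contained sketch (the real cmp_versions also validates every component through decimal() and handles the >, <=, >= and == operators):

    lt() {    # true if version $1 sorts before version $2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    # as in the trace: lcov 1.15 < 2, so the legacy --rc lcov_* option names are used
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov detected"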
00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:44.794 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:45.053 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:45.053 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:45.053 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:45.053 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:45.053 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
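The nvmf_veth_init sequence traced above builds a bridged two-namespace topology: one veth pair per side, the target ends moved into nvmf_tgt_ns_spdk, and the bridge-side ends enslaved to nvmf_br. Condensed to its essentials with the names and addresses from this run (the real helper also wires up the nvmf_init_if2/nvmf_tgt_if2 pair and installs the iptables ACCEPT rules shown):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the netns
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # splice the two pairs together
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3                                           # initiator -> target across the bridge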
00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:45.054 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:45.054 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:22:45.054 00:22:45.054 --- 10.0.0.3 ping statistics --- 00:22:45.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.054 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:45.054 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:45.054 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:22:45.054 00:22:45.054 --- 10.0.0.4 ping statistics --- 00:22:45.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.054 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:45.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:45.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:22:45.054 00:22:45.054 --- 10.0.0.1 ping statistics --- 00:22:45.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.054 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:45.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:22:45.054 00:22:45.054 --- 10.0.0.2 ping statistics --- 00:22:45.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.054 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # return 0 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # nvmfpid=96184 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # waitforlisten 96184 00:22:45.054 22:53:59 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96184 ']' 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:45.054 22:53:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:45.054 [2024-12-07 22:53:59.760111] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:22:45.054 [2024-12-07 22:53:59.760190] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.314 [2024-12-07 22:53:59.892061] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:45.314 [2024-12-07 22:53:59.925541] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.314 [2024-12-07 22:53:59.925607] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.314 [2024-12-07 22:53:59.925632] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.314 [2024-12-07 22:53:59.925640] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.314 [2024-12-07 22:53:59.925646] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
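nvmfappstart above amounts to launching nvmf_tgt inside the target namespace and then polling its RPC socket until the app answers. The equivalent manual launch, with the flags from this trace; the polling loop is a sketch in the spirit of waitforlisten, not its exact implementation:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # block until /var/tmp/spdk.sock exists and accepts RPCs
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
        sleep 0.5
    done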
00:22:45.314 [2024-12-07 22:53:59.925787] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.314 [2024-12-07 22:53:59.925798] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.314 [2024-12-07 22:53:59.954210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:45.314 22:54:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:45.314 22:54:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:22:45.314 22:54:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:45.314 22:54:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:45.314 22:54:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:45.314 22:54:00 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.314 22:54:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:45.314 22:54:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:45.882 [2024-12-07 22:54:00.350447] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.882 22:54:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:46.141 Malloc0 00:22:46.141 22:54:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:46.401 22:54:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:46.660 22:54:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:46.660 [2024-12-07 22:54:01.416575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:46.919 22:54:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96230 00:22:46.919 22:54:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:46.919 22:54:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96230 /var/tmp/bdevperf.sock 00:22:46.919 22:54:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96230 ']' 00:22:46.919 22:54:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:46.919 22:54:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:46.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:46.919 22:54:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
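The target-side provisioning traced above, gathered into one runnable sequence. Flags are copied verbatim from the log; the only gloss is that bdev_malloc_create takes size in MiB followed by block size, so Malloc0 is a 64 MiB RAM-backed namespace with 512-byte blocks:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192      # "-t tcp -o -u 8192" as traced
    $rpc_py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB, 512 B block size
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420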
00:22:46.919 22:54:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:46.919 22:54:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:46.919 [2024-12-07 22:54:01.493087] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:22:46.919 [2024-12-07 22:54:01.493200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96230 ] 00:22:46.919 [2024-12-07 22:54:01.632238] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.919 [2024-12-07 22:54:01.674584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.184 [2024-12-07 22:54:01.708735] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:47.753 22:54:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:47.753 22:54:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:22:47.753 22:54:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:48.013 22:54:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:48.273 NVMe0n1 00:22:48.273 22:54:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96255 00:22:48.273 22:54:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:48.273 22:54:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:48.531 Running I/O for 10 seconds... 
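On the initiator side, bdevperf runs as a second SPDK app (-z makes it wait for configuration over /var/tmp/bdevperf.sock) and the controller is attached with the short timeouts this test exercises: per the option names, the controller is declared lost after 5 seconds, with reconnect attempts every 2 seconds. The traced steps in one place (flags verbatim; see rpc.py bdev_nvme_set_options --help for the exact -r -1 semantics):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r "$bdevperf_rpc_sock" -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!
    $rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_set_options -r -1
    $rpc_py -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # start the 10-second verify workload defined by -q/-o/-w/-t above
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$bdevperf_rpc_sock" perform_tests &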
00:22:49.469 22:54:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:49.469 7716.00 IOPS, 30.14 MiB/s [2024-12-07T22:54:04.235Z] [2024-12-07 22:54:04.174593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.469 [2024-12-07 22:54:04.174661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.174700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:68640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.469 [2024-12-07 22:54:04.174710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.174721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.469 [2024-12-07 22:54:04.174729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.174739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.469 [2024-12-07 22:54:04.174748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.174758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.469 [2024-12-07 22:54:04.174766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.174776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:68672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.469 [2024-12-07 22:54:04.174784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.174793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.469 [2024-12-07 22:54:04.174802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.174811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:68688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.469 [2024-12-07 22:54:04.174820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.174830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.469 [2024-12-07 22:54:04.174838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.174848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:68704 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.469 [2024-12-07 22:54:04.174855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.174865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.469 [2024-12-07 22:54:04.174873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.174883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.469 [2024-12-07 22:54:04.174891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.174917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.469 [2024-12-07 22:54:04.174943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.174961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.469 [2024-12-07 22:54:04.174970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.174995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.469 [2024-12-07 22:54:04.175005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.175015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-07 22:54:04.175023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.175047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:67760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-07 22:54:04.175072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.175083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:67768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-07 22:54:04.175091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.469 [2024-12-07 22:54:04.175101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:67776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.469 [2024-12-07 22:54:04.175110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-07 22:54:04.175120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:49.470 [2024-12-07 22:54:04.175128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-07 22:54:04.175138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:67792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-07 22:54:04.175146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-07 22:54:04.175156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:67800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-07 22:54:04.175164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-07 22:54:04.175174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-07 22:54:04.175181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-07 22:54:04.175191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:67816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-07 22:54:04.175199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-07 22:54:04.175225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:67824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-07 22:54:04.175234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-07 22:54:04.175244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:67832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-07 22:54:04.175252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-07 22:54:04.175262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:67840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-07 22:54:04.175270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-07 22:54:04.175280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-07 22:54:04.175289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-07 22:54:04.175299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-07 22:54:04.175308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.470 [2024-12-07 22:54:04.175318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:67864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.470 [2024-12-07 
00:22:49.470 [2024-12-07 22:54:04.175326 - 22:54:04.177201] nvme_qpair.c: 243/474: *NOTICE*: [repeated print_command/print_completion pairs condensed: queued READ commands (sqid:1, lba 67872-68616) and WRITE commands (sqid:1, lba 68752-68768), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:22:49.472 [2024-12-07 22:54:04.177211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x87a670 is same with the state(6) to be set
00:22:49.472 [2024-12-07 22:54:04.177222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:49.472 [2024-12-07 22:54:04.177229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:49.472 [2024-12-07 22:54:04.177237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68624 len:8 PRP1 0x0 PRP2 0x0
00:22:49.472 [2024-12-07 22:54:04.177245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:49.472 [2024-12-07 22:54:04.177283] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x87a670 was disconnected and freed. reset controller.
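The status pair printed for every aborted command decodes, per the NVMe base specification, as status code type 0x0 (generic command status) / status code 0x08, "Command Aborted due to SQ Deletion": once the TCP qpair drops, the initiator completes everything still queued on that submission queue with this status instead of leaving it pending. To size such a burst without reading each entry, the print_command notices can be tallied straight from a saved copy of this console output (a minimal sketch; 'console.log' is a hypothetical local file name, not an artifact of this job):

    # Tally aborted READ vs WRITE submissions in the saved log (bash + coreutils).
    grep -o '\*NOTICE\*: \(READ\|WRITE\) sqid:1' console.log | sort | uniq -c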
00:22:49.472 [2024-12-07 22:54:04.177510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:49.472 [2024-12-07 22:54:04.177586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x859630 (9): Bad file descriptor 00:22:49.472 [2024-12-07 22:54:04.177691] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.472 [2024-12-07 22:54:04.177726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x859630 with addr=10.0.0.3, port=4420 00:22:49.472 [2024-12-07 22:54:04.177736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x859630 is same with the state(6) to be set 00:22:49.472 [2024-12-07 22:54:04.177753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x859630 (9): Bad file descriptor 00:22:49.472 [2024-12-07 22:54:04.177768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:49.472 [2024-12-07 22:54:04.177780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:49.472 [2024-12-07 22:54:04.177790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:49.472 [2024-12-07 22:54:04.177810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.472 [2024-12-07 22:54:04.177836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:49.472 22:54:04 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:51.368 4234.50 IOPS, 16.54 MiB/s [2024-12-07T22:54:06.393Z] 2823.00 IOPS, 11.03 MiB/s [2024-12-07T22:54:06.393Z] [2024-12-07 22:54:06.177947] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.627 [2024-12-07 22:54:06.178007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x859630 with addr=10.0.0.3, port=4420 00:22:51.627 [2024-12-07 22:54:06.178022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x859630 is same with the state(6) to be set 00:22:51.627 [2024-12-07 22:54:06.178041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x859630 (9): Bad file descriptor 00:22:51.627 [2024-12-07 22:54:06.178058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:51.627 [2024-12-07 22:54:06.178067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:51.627 [2024-12-07 22:54:06.178076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:51.627 [2024-12-07 22:54:06.178108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
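Each reconnect attempt above dies inside uring_sock_create() with errno = 111, evidently because the test removed the target's listener (as it visibly does again at 22:54:15 below), so nothing is accepting on 10.0.0.3:4420 and every connect() is actively refused until a listener is re-added. On Linux, 111 is ECONNREFUSED, which is easy to confirm from a shell:

    # Decode errno 111 via Python's errno table (any Linux box).
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused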
00:22:51.627 [2024-12-07 22:54:06.178121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:51.627 22:54:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:51.627 22:54:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:51.627 22:54:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:51.885 22:54:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:51.885 22:54:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:51.885 22:54:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:51.885 22:54:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:52.145 22:54:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:52.145 22:54:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:53.338 2117.25 IOPS, 8.27 MiB/s [2024-12-07T22:54:08.363Z] 1693.80 IOPS, 6.62 MiB/s [2024-12-07T22:54:08.363Z] [2024-12-07 22:54:08.178307] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.597 [2024-12-07 22:54:08.178372] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x859630 with addr=10.0.0.3, port=4420 00:22:53.597 [2024-12-07 22:54:08.178388] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x859630 is same with the state(6) to be set 00:22:53.597 [2024-12-07 22:54:08.178410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x859630 (9): Bad file descriptor 00:22:53.597 [2024-12-07 22:54:08.178427] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:53.597 [2024-12-07 22:54:08.178437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:53.597 [2024-12-07 22:54:08.178446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:53.597 [2024-12-07 22:54:08.178469] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:53.597 [2024-12-07 22:54:08.178479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:55.468 1411.50 IOPS, 5.51 MiB/s [2024-12-07T22:54:10.234Z] 1209.86 IOPS, 4.73 MiB/s [2024-12-07T22:54:10.234Z] [2024-12-07 22:54:10.178574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:55.468 [2024-12-07 22:54:10.178615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:55.468 [2024-12-07 22:54:10.178663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:55.468 [2024-12-07 22:54:10.178672] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:22:55.468 [2024-12-07 22:54:10.178699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
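The get_controller and get_bdev checks traced above poll bdevperf over its RPC socket and pull names out with jq; they still return NVMe0 and NVMe0n1 while the reconnect loop is failing, i.e. the controller and bdev stay registered while bdev_nvme retries. A minimal reconstruction of the two helpers, assuming only what the host/timeout.sh@37/@41 trace lines show (the real definitions live in host/timeout.sh in the SPDK repo):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    get_controller() {
        # name of the controller bdevperf currently has attached
        "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    }

    get_bdev() {
        # name of the bdev exposed by that controller
        "$rpc" -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'
    }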
00:22:56.664 1058.62 IOPS, 4.14 MiB/s 00:22:56.664 Latency(us) 00:22:56.664 [2024-12-07T22:54:11.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.664 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:56.664 Verification LBA range: start 0x0 length 0x4000 00:22:56.664 NVMe0n1 : 8.10 1046.03 4.09 15.81 0.00 120386.79 3634.27 7015926.69 00:22:56.664 [2024-12-07T22:54:11.430Z] =================================================================================================================== 00:22:56.664 [2024-12-07T22:54:11.430Z] Total : 1046.03 4.09 15.81 0.00 120386.79 3634.27 7015926.69 00:22:56.664 { 00:22:56.664 "results": [ 00:22:56.664 { 00:22:56.664 "job": "NVMe0n1", 00:22:56.664 "core_mask": "0x4", 00:22:56.664 "workload": "verify", 00:22:56.664 "status": "finished", 00:22:56.664 "verify_range": { 00:22:56.664 "start": 0, 00:22:56.664 "length": 16384 00:22:56.664 }, 00:22:56.664 "queue_depth": 128, 00:22:56.664 "io_size": 4096, 00:22:56.664 "runtime": 8.096338, 00:22:56.664 "iops": 1046.0284637326158, 00:22:56.664 "mibps": 4.0860486864555305, 00:22:56.665 "io_failed": 128, 00:22:56.665 "io_timeout": 0, 00:22:56.665 "avg_latency_us": 120386.79307919253, 00:22:56.665 "min_latency_us": 3634.269090909091, 00:22:56.665 "max_latency_us": 7015926.69090909 00:22:56.665 } 00:22:56.665 ], 00:22:56.665 "core_count": 1 00:22:56.665 } 00:22:57.232 22:54:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:22:57.232 22:54:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:57.232 22:54:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:57.492 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:22:57.492 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:22:57.492 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:57.492 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:57.492 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:22:57.492 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 96255 00:22:57.492 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96230 00:22:57.492 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96230 ']' 00:22:57.492 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96230 00:22:57.492 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:22:57.492 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:57.492 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96230 00:22:57.751 killing process with pid 96230 00:22:57.751 Received shutdown signal, test time was about 9.194061 seconds 00:22:57.751 00:22:57.751 Latency(us) 00:22:57.751 [2024-12-07T22:54:12.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.751 [2024-12-07T22:54:12.517Z] =================================================================================================================== 00:22:57.751 [2024-12-07T22:54:12.517Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:22:57.751 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:57.751 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:57.751 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96230' 00:22:57.751 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96230 00:22:57.751 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96230 00:22:57.751 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:58.011 [2024-12-07 22:54:12.675952] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:58.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.011 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96372 00:22:58.011 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:58.011 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96372 /var/tmp/bdevperf.sock 00:22:58.011 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96372 ']' 00:22:58.011 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.011 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:58.011 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.011 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:58.011 22:54:12 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:58.011 [2024-12-07 22:54:12.747043] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:22:58.011 [2024-12-07 22:54:12.747345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96372 ] 00:22:58.270 [2024-12-07 22:54:12.883814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.270 [2024-12-07 22:54:12.917330] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.270 [2024-12-07 22:54:12.946290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:59.208 22:54:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.208 22:54:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:22:59.208 22:54:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:59.208 22:54:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:22:59.467 NVMe0n1 00:22:59.467 22:54:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:59.467 22:54:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96396 00:22:59.467 22:54:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:22:59.727 Running I/O for 10 seconds... 
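The bdev_nvme_attach_controller call traced above arms the three recovery knobs this timeout test exercises; a commented restatement of the same invocation follows (flag semantics summarized from the SPDK bdev_nvme documentation, not from this log):

    # --reconnect-delay-sec 1       wait 1 s between reconnect attempts
    # --fast-io-fail-timeout-sec 2  after 2 s of disconnect, start failing queued
    #                               I/O back to bdevperf while still reconnecting
    # --ctrlr-loss-timeout-sec 5    after 5 s, give up and delete the controller
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

With the listener about to be removed, this predicts the shape of the failure that follows: roughly two seconds of silent retries, then fast-failed I/O, then controller teardown within about five seconds.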
00:23:00.664 22:54:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:23:00.927 7957.00 IOPS, 31.08 MiB/s [2024-12-07T22:54:15.693Z]
00:23:00.927 [2024-12-07 22:54:15.441220 - 22:54:15.442676] nvme_qpair.c: 243/474: *NOTICE*: [repeated print_command/print_completion pairs condensed: queued WRITE commands (sqid:1, lba 73304-73856, len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.929 [2024-12-07 22:54:15.442702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.442731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.442741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.442753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.442762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.442774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.442783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.442795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.442804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.442816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.442825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.442837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.442847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.442859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.442869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.442880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.442890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.442901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.442911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.442933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:110 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.442943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.442955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.442965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.442976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.442985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.442997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.443006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.443018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.443027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.443038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.443062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.443088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.929 [2024-12-07 22:54:15.443098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.443108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.929 [2024-12-07 22:54:15.443117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.443127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.443136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.443146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.443155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.443165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73008 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.443174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.443185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.443194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.443204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.443212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.443223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.443231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.443242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.443250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.443267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:00.929 [2024-12-07 22:54:15.443276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.929 [2024-12-07 22:54:15.443286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.929 [2024-12-07 22:54:15.443295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:00.930 [2024-12-07 22:54:15.443372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 
22:54:15.443561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:73280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:00.930 [2024-12-07 22:54:15.443895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443907] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc79c0 is same with the state(6) to be set 00:23:00.930 [2024-12-07 22:54:15.443929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:00.930 [2024-12-07 22:54:15.443937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:00.930 [2024-12-07 22:54:15.443945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73296 len:8 PRP1 0x0 PRP2 0x0 00:23:00.930 [2024-12-07 22:54:15.443954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.930 [2024-12-07 22:54:15.443995] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbc79c0 was disconnected and freed. reset controller. 
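The "(00/08)" printed with each aborted completion above is SPDK's "(SCT/SC)" pair: status code type 0x0 (generic command status) with status code 0x08, which the NVMe spec defines as "Command Aborted due to SQ Deletion", exactly what is expected when the submission queue is torn down for a controller reset. A minimal, illustrative Python sketch of that decoding (the helper name and table are ours, not SPDK's):

    # Decode the (SCT/SC) pair SPDK prints for an NVMe completion, e.g. (00/08).
    GENERIC_SC = {
        0x00: "SUCCESSFUL COMPLETION",
        0x04: "DATA TRANSFER ERROR",
        0x07: "ABORTED - BY REQUEST",
        0x08: "ABORTED - SQ DELETION",
    }

    def decode_status(sct: int, sc: int) -> str:
        if sct == 0x0:  # generic command status set
            return GENERIC_SC.get(sc, f"generic sc=0x{sc:02x}")
        return f"sct=0x{sct:x}, sc=0x{sc:02x}"

    print(decode_status(0x00, 0x08))  # -> ABORTED - SQ DELETION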
00:23:00.930 [2024-12-07 22:54:15.444243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:00.930 [2024-12-07 22:54:15.444323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba68b0 (9): Bad file descriptor
00:23:00.930 [2024-12-07 22:54:15.444422] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:23:00.930 [2024-12-07 22:54:15.444443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba68b0 with addr=10.0.0.3, port=4420
00:23:00.930 [2024-12-07 22:54:15.444453] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba68b0 is same with the state(6) to be set
00:23:00.930 [2024-12-07 22:54:15.444471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba68b0 (9): Bad file descriptor
00:23:00.930 [2024-12-07 22:54:15.444500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:00.930 [2024-12-07 22:54:15.444509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:00.930 [2024-12-07 22:54:15.444518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:00.930 [2024-12-07 22:54:15.444538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:00.930 [2024-12-07 22:54:15.444548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
22:54:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:23:01.801 4554.50 IOPS, 17.79 MiB/s [2024-12-07T22:54:16.567Z]
[2024-12-07 22:54:16.444645] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:23:01.801 [2024-12-07 22:54:16.444693] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba68b0 with addr=10.0.0.3, port=4420
00:23:01.801 [2024-12-07 22:54:16.444723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba68b0 is same with the state(6) to be set
00:23:01.801 [2024-12-07 22:54:16.444742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba68b0 (9): Bad file descriptor
00:23:01.801 [2024-12-07 22:54:16.444758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:01.801 [2024-12-07 22:54:16.444767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:01.801 [2024-12-07 22:54:16.444776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:01.801 [2024-12-07 22:54:16.444798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
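Each reconnect attempt here fails with "connect() failed, errno = 111"; on Linux that value is ECONNREFUSED, consistent with the NVMe/TCP listener on 10.0.0.3:4420 having been taken down earlier in the test, so every controller reset fails until nvmf_subsystem_add_listener below restores it. A quick check, assuming a Linux errno table:

    import errno
    import os

    # errno 111, as reported by uring_sock_create's connect(), is ECONNREFUSED on Linux.
    assert errno.ECONNREFUSED == 111
    print(os.strerror(errno.ECONNREFUSED))  # -> Connection refused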
00:23:01.801 [2024-12-07 22:54:16.444809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
22:54:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:23:02.059 [2024-12-07 22:54:16.716834] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:23:02.059 22:54:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 96396
00:23:02.884 3036.33 IOPS, 11.86 MiB/s [2024-12-07T22:54:17.650Z]
[2024-12-07 22:54:17.462331] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:04.758 2277.25 IOPS, 8.90 MiB/s [2024-12-07T22:54:20.460Z]
3627.80 IOPS, 14.17 MiB/s [2024-12-07T22:54:21.398Z]
4811.17 IOPS, 18.79 MiB/s [2024-12-07T22:54:22.335Z]
5670.14 IOPS, 22.15 MiB/s [2024-12-07T22:54:23.713Z]
6317.38 IOPS, 24.68 MiB/s [2024-12-07T22:54:24.649Z]
6827.89 IOPS, 26.67 MiB/s [2024-12-07T22:54:24.649Z]
7226.70 IOPS, 28.23 MiB/s
00:23:09.883 Latency(us)
00:23:09.883 [2024-12-07T22:54:24.649Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:09.883 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:09.883 Verification LBA range: start 0x0 length 0x4000
00:23:09.883 NVMe0n1                     :      10.01    7233.87      28.26       0.00     0.00   17660.20    1109.64 3019898.88
00:23:09.884 [2024-12-07T22:54:24.650Z] ===================================================================================================================
00:23:09.884 [2024-12-07T22:54:24.650Z] Total                       :               7233.87      28.26       0.00     0.00   17660.20    1109.64 3019898.88
00:23:09.884 {
00:23:09.884   "results": [
00:23:09.884     {
00:23:09.884       "job": "NVMe0n1",
00:23:09.884       "core_mask": "0x4",
00:23:09.884       "workload": "verify",
00:23:09.884       "status": "finished",
00:23:09.884       "verify_range": {
00:23:09.884         "start": 0,
00:23:09.884         "length": 16384
00:23:09.884       },
00:23:09.884       "queue_depth": 128,
00:23:09.884       "io_size": 4096,
00:23:09.884       "runtime": 10.007789,
00:23:09.884       "iops": 7233.865542129235,
00:23:09.884       "mibps": 28.257287273942325,
00:23:09.884       "io_failed": 0,
00:23:09.884       "io_timeout": 0,
00:23:09.884       "avg_latency_us": 17660.199506897137,
00:23:09.884       "min_latency_us": 1109.6436363636365,
00:23:09.884       "max_latency_us": 3019898.88
00:23:09.884     }
00:23:09.884   ],
00:23:09.884   "core_count": 1
00:23:09.884 }
00:23:09.884 22:54:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96496
00:23:09.884 22:54:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:09.884 22:54:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:23:09.884 Running I/O for 10 seconds...
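The summary table and the JSON block above are two views of the same run, and the MiB/s column follows directly from IOPS and the 4096-byte io_size. A small sanity check of that arithmetic, with the values copied from the results block:

    # Recompute the NVMe0n1 row of the bdevperf summary from the JSON fields above.
    iops = 7233.865542129235   # "iops"
    io_size = 4096             # "io_size", bytes per I/O
    runtime = 10.007789        # "runtime", seconds

    mibps = iops * io_size / (1024 ** 2)
    print(f"{mibps:.2f} MiB/s")                # -> 28.26, matching the table and "mibps"
    print(f"{iops * runtime:.0f} total I/Os")  # total I/Os completed over the run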
00:23:10.831 22:54:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:23:10.831 8084.00 IOPS, 31.58 MiB/s [2024-12-07T22:54:25.597Z]
00:23:10.831 [2024-12-07 22:54:25.579343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5001a0 is same with the state(6) to be set
00:23:10.831-00:23:10.833 [... the same recv-state message for tqpair=0x5001a0 repeats roughly a hundred more times between 22:54:25.579558 and 22:54:25.580373; repeats omitted ...]
00:23:10.833 [2024-12-07 22:54:25.580426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:10.833 [2024-12-07 22:54:25.580472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:10.833-00:23:10.834 [... every further queued READ on qid:1 (lba 74024 through 74320) is printed and aborted with the same SQ DELETION (00/08) status; repeats omitted ...]
00:23:10.834 [2024-12-07 22:54:25.581247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:10.834 [2024-12-07 22:54:25.581255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581454] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581638] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.834 [2024-12-07 22:54:25.581688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.834 [2024-12-07 22:54:25.581699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.581707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.581717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.581725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.581735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.581743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.581752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.581760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.581770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.581778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.581803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.581811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.581821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.581830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.581840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.581848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.581858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.581867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.581878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.581886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.581896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.581904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.581914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.581922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.581941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.581950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.581960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.581969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.581979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.581987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.581997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:10.835 [2024-12-07 22:54:25.582243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582461] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.835 [2024-12-07 22:54:25.582484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.835 [2024-12-07 22:54:25.582492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.836 [2024-12-07 22:54:25.582511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.836 [2024-12-07 22:54:25.582529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.836 [2024-12-07 22:54:25.582547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.836 [2024-12-07 22:54:25.582582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.836 [2024-12-07 22:54:25.582602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.836 [2024-12-07 22:54:25.582621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.836 [2024-12-07 22:54:25.582650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.836 [2024-12-07 22:54:25.582669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.836 [2024-12-07 22:54:25.582691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-12-07 22:54:25.582725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-12-07 22:54:25.582743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-12-07 22:54:25.582762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-12-07 22:54:25.582780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-12-07 22:54:25.582799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-12-07 22:54:25.582817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-12-07 22:54:25.582838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-12-07 22:54:25.582856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-12-07 22:54:25.582875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:125 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-12-07 22:54:25.582918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-12-07 22:54:25.582939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-12-07 22:54:25.582957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-12-07 22:54:25.582975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.582985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-12-07 22:54:25.582993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.583002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:10.836 [2024-12-07 22:54:25.583010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.583022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:10.836 [2024-12-07 22:54:25.583046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.583055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc6340 is same with the state(6) to be set 00:23:10.836 [2024-12-07 22:54:25.583066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:10.836 [2024-12-07 22:54:25.583073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:10.836 [2024-12-07 22:54:25.583081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74912 len:8 PRP1 0x0 PRP2 0x0 00:23:10.836 [2024-12-07 22:54:25.583089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.836 [2024-12-07 22:54:25.583127] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbc6340 was disconnected and freed. reset controller. 
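The block above is the host driver draining I/O qpair 0xbc6340 after the transport dropped: every in-flight command is completed manually with status (00/08), i.e. status code type 0 (generic) with status code 0x08, "Command Aborted due to SQ Deletion", after which the bdev layer frees the qpair and schedules a controller reset. A quick way to gauge how much I/O was in flight is to count those completions in a captured copy of this output (illustrative only; build.log is a hypothetical file name):

    # count aborted I/O completions for the torn-down qpair (qid:1)
    grep -o 'ABORTED - SQ DELETION (00/08) qid:1' build.log | wc -l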
00:23:10.836 [2024-12-07 22:54:25.583205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:10.836 [2024-12-07 22:54:25.583220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:10.836 [2024-12-07 22:54:25.583231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:10.836 [2024-12-07 22:54:25.583239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:10.836 [2024-12-07 22:54:25.583248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:10.836 [2024-12-07 22:54:25.583256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:10.836 [2024-12-07 22:54:25.583268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:10.836 [2024-12-07 22:54:25.583276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:10.836 [2024-12-07 22:54:25.583284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba68b0 is same with the state(6) to be set
00:23:10.836 [2024-12-07 22:54:25.583493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:10.836 [2024-12-07 22:54:25.583513] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba68b0 (9): Bad file descriptor
00:23:10.836 [2024-12-07 22:54:25.583614] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:23:10.836 [2024-12-07 22:54:25.583635] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba68b0 with addr=10.0.0.3, port=4420
00:23:10.836 [2024-12-07 22:54:25.583645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba68b0 is same with the state(6) to be set
00:23:10.837 [2024-12-07 22:54:25.583662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba68b0 (9): Bad file descriptor
00:23:10.837 [2024-12-07 22:54:25.583676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:10.837 [2024-12-07 22:54:25.583685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:10.837 [2024-12-07 22:54:25.583694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:10.837 [2024-12-07 22:54:25.583712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
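The reconnect attempt above fails immediately with errno = 111 from connect(): on Linux that is ECONNREFUSED, exactly what is expected here since the test has removed the 10.0.0.3:4420 listener and nothing is accepting TCP connections there yet. The "(9)" in the flush failure is likewise errno 9, EBADF, from the already-closed socket. For reference:

    python3 -c 'import errno; print(errno.errorcode[111])'   # prints ECONNREFUSED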
00:23:11.173 22:54:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3
00:23:11.173 [2024-12-07 22:54:25.599252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:12.008 4626.00 IOPS, 18.07 MiB/s [2024-12-07T22:54:26.774Z] [2024-12-07 22:54:26.599385] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.008 [2024-12-07 22:54:26.599612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba68b0 with addr=10.0.0.3, port=4420
00:23:12.008 [2024-12-07 22:54:26.599637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba68b0 is same with the state(6) to be set
00:23:12.008 [2024-12-07 22:54:26.599664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba68b0 (9): Bad file descriptor
00:23:12.008 [2024-12-07 22:54:26.599683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:12.008 [2024-12-07 22:54:26.599692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:12.008 [2024-12-07 22:54:26.599710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:12.008 [2024-12-07 22:54:26.599751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:12.008 [2024-12-07 22:54:26.599766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:12.943 3084.00 IOPS, 12.05 MiB/s [2024-12-07T22:54:27.709Z] [2024-12-07 22:54:27.599864] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:23:12.943 [2024-12-07 22:54:27.599947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba68b0 with addr=10.0.0.3, port=4420
00:23:12.943 [2024-12-07 22:54:27.599967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba68b0 is same with the state(6) to be set
00:23:12.943 [2024-12-07 22:54:27.599986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba68b0 (9): Bad file descriptor
00:23:12.943 [2024-12-07 22:54:27.600013] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:12.943 [2024-12-07 22:54:27.600023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:12.943 [2024-12-07 22:54:27.600048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:12.943 [2024-12-07 22:54:27.600070] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
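The same disconnect/reconnect/fail cycle then repeats about once per second (22:54:26, 22:54:27, ...) while the listener is down, and bdevperf's per-second status decays (4626.00 -> 3084.00 IOPS), presumably because the running average now includes the stalled seconds. The retry loop behaves roughly like this plain-bash probe (an illustrative analogy, not SPDK code; address and port taken from this log):

    # retry until something accepts TCP connections on 10.0.0.3:4420 again
    until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.3/4420' 2>/dev/null; do
        sleep 1
    done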
00:23:12.943 [2024-12-07 22:54:27.600081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:13.876 2313.00 IOPS, 9.04 MiB/s [2024-12-07T22:54:28.643Z] [2024-12-07 22:54:28.600407] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:23:13.877 [2024-12-07 22:54:28.600466] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xba68b0 with addr=10.0.0.3, port=4420
00:23:13.877 [2024-12-07 22:54:28.600481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xba68b0 is same with the state(6) to be set
00:23:13.877 [2024-12-07 22:54:28.600713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba68b0 (9): Bad file descriptor
00:23:13.877 [2024-12-07 22:54:28.600979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:13.877 [2024-12-07 22:54:28.601000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:13.877 [2024-12-07 22:54:28.601010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:13.877 22:54:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:23:13.877 [2024-12-07 22:54:28.604734] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:13.877 [2024-12-07 22:54:28.604954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:14.135 [2024-12-07 22:54:28.859909] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:23:14.135 22:54:28 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 96496
00:23:14.961 1850.40 IOPS, 7.23 MiB/s [2024-12-07T22:54:29.727Z] [2024-12-07 22:54:29.644083] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
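This is the recovery half of the test: host/timeout.sh@102 restores the listener, the target logs "Listening on 10.0.0.3 port 4420" again, and the next reset attempt succeeds. The add_listener RPC here is the counterpart of the remove_listener call the test issues elsewhere (see host/timeout.sh@126 further down); both as logged, with paths relative to the SPDK repo:

    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420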
00:23:16.834 2957.83 IOPS, 11.55 MiB/s [2024-12-07T22:54:32.541Z]
4073.57 IOPS, 15.91 MiB/s [2024-12-07T22:54:33.476Z]
4922.38 IOPS, 19.23 MiB/s [2024-12-07T22:54:34.853Z]
5579.00 IOPS, 21.79 MiB/s [2024-12-07T22:54:34.853Z]
6105.90 IOPS, 23.85 MiB/s
00:23:20.087 Latency(us)
00:23:20.087 [2024-12-07T22:54:34.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:20.087 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:20.087 Verification LBA range: start 0x0 length 0x4000
00:23:20.087 NVMe0n1 : 10.01 6109.08 23.86 4278.72 0.00 12293.08 569.72 3019898.88
00:23:20.087 [2024-12-07T22:54:34.853Z] ===================================================================================================================
00:23:20.087 [2024-12-07T22:54:34.853Z] Total : 6109.08 23.86 4278.72 0.00 12293.08 0.00 3019898.88
00:23:20.087 {
00:23:20.087 "results": [
00:23:20.087 {
00:23:20.087 "job": "NVMe0n1",
00:23:20.087 "core_mask": "0x4",
00:23:20.087 "workload": "verify",
00:23:20.087 "status": "finished",
00:23:20.087 "verify_range": {
00:23:20.087 "start": 0,
00:23:20.087 "length": 16384
00:23:20.087 },
00:23:20.087 "queue_depth": 128,
00:23:20.087 "io_size": 4096,
00:23:20.087 "runtime": 10.007898,
00:23:20.087 "iops": 6109.075052523517,
00:23:20.087 "mibps": 23.86357442391999,
00:23:20.087 "io_failed": 42821,
00:23:20.088 "io_timeout": 0,
00:23:20.088 "avg_latency_us": 12293.07778852006,
00:23:20.088 "min_latency_us": 569.7163636363637,
00:23:20.088 "max_latency_us": 3019898.88
00:23:20.088 }
00:23:20.088 ],
00:23:20.088 "core_count": 1
00:23:20.088 }
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96372
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96372 ']'
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96372
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96372
00:23:20.088 killing process with pid 96372
00:23:20.088 Received shutdown signal, test time was about 10.000000 seconds
00:23:20.088 Latency(us)
00:23:20.088 [2024-12-07T22:54:34.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:20.088 [2024-12-07T22:54:34.854Z] ===================================================================================================================
00:23:20.088 [2024-12-07T22:54:34.854Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96372'
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96372
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96372
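The JSON blob above is bdevperf's machine-readable summary of the first phase: roughly 6109 IOPS averaged over 10 s, with the 42821 failed I/Os corresponding to the commands aborted during the listener outage and a max latency of about 3.02 s. If that blob is captured to a file, the headline numbers pull out with jq (file name hypothetical):

    jq '.results[0] | {iops, io_failed, avg_latency_us, max_latency_us}' results.json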
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96615
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96615 /var/tmp/bdevperf.sock
00:23:20.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96615 ']'
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:20.088 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:23:20.088 [2024-12-07 22:54:34.675163] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization...
00:23:20.088 [2024-12-07 22:54:34.675447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96615 ]
00:23:20.088 [2024-12-07 22:54:34.809750] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:20.088 [2024-12-07 22:54:34.843774] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2
00:23:20.088 [2024-12-07 22:54:34.873387] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:23:20.348 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:20.348 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0
00:23:20.348 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96618
00:23:20.348 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96615 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:23:20.348 22:54:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:23:20.607 22:54:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:23:20.866 NVMe0n1
00:23:20.866 22:54:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96658
00:23:20.866 22:54:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:20.866 22:54:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
00:23:20.866 Running I/O for 10 seconds...
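For the second phase the wrapper starts bdevperf idle (-z, wait for an RPC before running) on its own socket, attaches a bpftrace probe, attaches the target with explicit recovery knobs, and then drives the run over that socket. Condensed from the trace above (repo-relative paths; flags exactly as logged):

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests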
00:23:21.799 22:54:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:22.060 17399.00 IOPS, 67.96 MiB/s [2024-12-07T22:54:36.826Z] [2024-12-07 22:54:36.753070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4fcb60 is same with the state(6) to be set
[... the identical tcp.c:1773 'recv state of tqpair=0x4fcb60' error repeats, with only the microsecond timestamp changing, through 22:54:36.753574 (~60 occurrences); repetitions elided ...]
00:23:22.061 [2024-12-07 22:54:36.753871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:22.061 [2024-12-07 22:54:36.753900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous READ command / 'ABORTED - SQ DELETION' completion pairs continue for cid 67-126 and then 65 down to 1 (~125 pairs, one per outstanding I/O); repetitions elided ...]
00:23:22.064 [2024-12-07 22:54:36.756645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x147d810 is same with the state(6) to be set 00:23:22.064 [2024-12-07 22:54:36.756657] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:22.064 [2024-12-07 22:54:36.756664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:22.064 [2024-12-07 22:54:36.756672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62488 len:8 PRP1 0x0 PRP2 0x0 00:23:22.064 [2024-12-07 22:54:36.756681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:22.064 [2024-12-07 22:54:36.756724] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x147d810 was disconnected and freed. reset controller.
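What happened here: removing the listener closed the TCP connection, so every in-flight read on qpair 1 was completed with ABORTED - SQ DELETION, the qpair was freed, and bdev_nvme scheduled a controller reset. While the retries below run, the controller state can be polled from a second shell; a sketch, assuming bdev_nvme_get_controllers (the standard SPDK RPC for this) and the socket path from this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers -n NVMe0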
00:23:22.064 [2024-12-07 22:54:36.757081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:22.064 [2024-12-07 22:54:36.757176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145c650 (9): Bad file descriptor 00:23:22.064 [2024-12-07 22:54:36.757281] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:22.064 [2024-12-07 22:54:36.757303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x145c650 with addr=10.0.0.3, port=4420 00:23:22.064 [2024-12-07 22:54:36.757314] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145c650 is same with the state(6) to be set 00:23:22.064 [2024-12-07 22:54:36.757333] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145c650 (9): Bad file descriptor 00:23:22.064 [2024-12-07 22:54:36.757354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:22.064 [2024-12-07 22:54:36.757364] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:22.064 [2024-12-07 22:54:36.757374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:22.064 [2024-12-07 22:54:36.757394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:22.064 [2024-12-07 22:54:36.757405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:22.064 22:54:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 96658 00:23:23.935 9812.00 IOPS, 38.33 MiB/s [2024-12-07T22:54:38.961Z] 6541.33 IOPS, 25.55 MiB/s [2024-12-07T22:54:38.961Z] [2024-12-07 22:54:38.757555] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.195 [2024-12-07 22:54:38.757620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x145c650 with addr=10.0.0.3, port=4420 00:23:24.195 [2024-12-07 22:54:38.757636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145c650 is same with the state(6) to be set 00:23:24.195 [2024-12-07 22:54:38.757671] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145c650 (9): Bad file descriptor 00:23:24.195 [2024-12-07 22:54:38.757696] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:24.195 [2024-12-07 22:54:38.757705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:24.195 [2024-12-07 22:54:38.757715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:24.195 [2024-12-07 22:54:38.757738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:24.195 [2024-12-07 22:54:38.757749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:26.069 4906.00 IOPS, 19.16 MiB/s [2024-12-07T22:54:40.835Z] 3924.80 IOPS, 15.33 MiB/s [2024-12-07T22:54:40.836Z] [2024-12-07 22:54:40.757898] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:26.070 [2024-12-07 22:54:40.757963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x145c650 with addr=10.0.0.3, port=4420 00:23:26.070 [2024-12-07 22:54:40.757978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145c650 is same with the state(6) to be set 00:23:26.070 [2024-12-07 22:54:40.758000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145c650 (9): Bad file descriptor 00:23:26.070 [2024-12-07 22:54:40.758030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:26.070 [2024-12-07 22:54:40.758041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:26.070 [2024-12-07 22:54:40.758051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:26.070 [2024-12-07 22:54:40.758074] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:26.070 [2024-12-07 22:54:40.758085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:27.944 3270.67 IOPS, 12.78 MiB/s [2024-12-07T22:54:42.969Z] 2803.43 IOPS, 10.95 MiB/s [2024-12-07T22:54:42.969Z] [2024-12-07 22:54:42.758127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:28.203 [2024-12-07 22:54:42.758167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:28.203 [2024-12-07 22:54:42.758179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:28.203 [2024-12-07 22:54:42.758188] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:28.203 [2024-12-07 22:54:42.758210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
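The timestamps line up with the configured knobs. With --reconnect-delay-sec 2 and --ctrlr-loss-timeout-sec 5, the expected cadence after the qpair drop at 22:54:36 is:

    # t≈0 s (22:54:36)  immediate reset after the qpair drop -> connect() errno 111
    # t≈2 s (22:54:38)  reconnect attempt                    -> errno 111
    # t≈4 s (22:54:40)  reconnect attempt                    -> errno 111
    # t≈6 s (22:54:42)  past the 5 s loss window -> 'already in failed state';
    #                   bdev_nvme stops trying to connect

That matches the bpftrace output dumped below: three 'reconnect delay bdev controller NVMe0' events at roughly 2 s spacing, which host/timeout.sh@132 counts to verify the reconnect-delay logic.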
00:23:29.141 2453.00 IOPS, 9.58 MiB/s 00:23:29.141 Latency(us) 00:23:29.141 [2024-12-07T22:54:43.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.141 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:29.141 NVMe0n1 : 8.15 2408.38 9.41 15.71 0.00 52746.73 6940.86 7015926.69 00:23:29.141 [2024-12-07T22:54:43.907Z] =================================================================================================================== 00:23:29.141 [2024-12-07T22:54:43.907Z] Total : 2408.38 9.41 15.71 0.00 52746.73 6940.86 7015926.69 00:23:29.141 { 00:23:29.141 "results": [ 00:23:29.141 { 00:23:29.141 "job": "NVMe0n1", 00:23:29.141 "core_mask": "0x4", 00:23:29.141 "workload": "randread", 00:23:29.141 "status": "finished", 00:23:29.141 "queue_depth": 128, 00:23:29.141 "io_size": 4096, 00:23:29.141 "runtime": 8.14822, 00:23:29.141 "iops": 2408.3787624781853, 00:23:29.141 "mibps": 9.407729540930411, 00:23:29.141 "io_failed": 128, 00:23:29.141 "io_timeout": 0, 00:23:29.141 "avg_latency_us": 52746.726757244374, 00:23:29.141 "min_latency_us": 6940.858181818182, 00:23:29.141 "max_latency_us": 7015926.69090909 00:23:29.141 } 00:23:29.141 ], 00:23:29.141 "core_count": 1 00:23:29.141 } 00:23:29.141 22:54:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:29.141 Attaching 5 probes... 00:23:29.141 1375.709750: reset bdev controller NVMe0 00:23:29.141 1375.858844: reconnect bdev controller NVMe0 00:23:29.141 3376.065425: reconnect delay bdev controller NVMe0 00:23:29.141 3376.082985: reconnect bdev controller NVMe0 00:23:29.141 5376.409714: reconnect delay bdev controller NVMe0 00:23:29.141 5376.442994: reconnect bdev controller NVMe0 00:23:29.141 7376.745110: reconnect delay bdev controller NVMe0 00:23:29.141 7376.763071: reconnect bdev controller NVMe0 00:23:29.141 22:54:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:29.141 22:54:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:29.141 22:54:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 96618 00:23:29.141 22:54:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:29.141 22:54:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96615 00:23:29.141 22:54:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96615 ']' 00:23:29.141 22:54:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96615 00:23:29.141 22:54:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:29.141 22:54:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.141 22:54:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96615 00:23:29.141 killing process with pid 96615 00:23:29.141 Received shutdown signal, test time was about 8.215460 seconds 00:23:29.141 00:23:29.141 Latency(us) 00:23:29.141 [2024-12-07T22:54:43.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.141 [2024-12-07T22:54:43.907Z] =================================================================================================================== 00:23:29.141 [2024-12-07T22:54:43.907Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:29.141 22:54:43 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:29.141 22:54:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:29.141 22:54:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96615' 00:23:29.141 22:54:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96615 00:23:29.141 22:54:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96615 00:23:29.401 22:54:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:29.659 rmmod nvme_tcp 00:23:29.659 rmmod nvme_fabrics 00:23:29.659 rmmod nvme_keyring 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@513 -- # '[' -n 96184 ']' 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # killprocess 96184 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96184 ']' 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96184 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96184 00:23:29.659 killing process with pid 96184 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96184' 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96184 00:23:29.659 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96184 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:29.918 22:54:44 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-save 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:29.918 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.177 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.177 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:23:30.177 ************************************ 00:23:30.177 END TEST nvmf_timeout 00:23:30.177 ************************************ 00:23:30.177 00:23:30.177 real 0m45.634s 00:23:30.177 user 2m13.892s 00:23:30.177 sys 0m5.389s 00:23:30.177 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:30.177 22:54:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:30.177 22:54:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:23:30.177 22:54:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:30.177 00:23:30.177 real 5m39.988s 00:23:30.177 user 15m57.241s 00:23:30.177 sys 1m15.558s 00:23:30.177 22:54:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:30.177 22:54:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
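Before the teardown above ran, the pass criterion for this test sat at host/timeout.sh line 132: grep -c counted three 'reconnect delay bdev controller NVMe0' probes in trace.txt, and the guard (( 3 <= 2 )) evaluated false, i.e. enough delayed reconnects were observed. A standalone sketch of that check (the if-wrapper is an assumption; the log only shows the expanded grep and arithmetic):

count=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
if (( count <= 2 )); then
    echo "expected more than 2 reconnect delays, got $count" >&2
    exit 1
fi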
00:23:30.177 ************************************ 00:23:30.177 END TEST nvmf_host 00:23:30.177 ************************************ 00:23:30.177 22:54:44 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:23:30.177 22:54:44 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:23:30.177 ************************************ 00:23:30.177 END TEST nvmf_tcp 00:23:30.177 ************************************ 00:23:30.177 00:23:30.177 real 14m57.919s 00:23:30.177 user 39m26.457s 00:23:30.177 sys 4m1.070s 00:23:30.177 22:54:44 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:30.177 22:54:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:30.177 22:54:44 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:23:30.177 22:54:44 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:30.177 22:54:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:30.177 22:54:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:30.177 22:54:44 -- common/autotest_common.sh@10 -- # set +x 00:23:30.177 ************************************ 00:23:30.177 START TEST nvmf_dif 00:23:30.177 ************************************ 00:23:30.177 22:54:44 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:30.177 * Looking for test storage... 00:23:30.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:30.177 22:54:44 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:30.177 22:54:44 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:23:30.177 22:54:44 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:30.435 22:54:45 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:23:30.435 22:54:45 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:30.435 22:54:45 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:30.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.435 --rc genhtml_branch_coverage=1 00:23:30.435 --rc genhtml_function_coverage=1 00:23:30.435 --rc genhtml_legend=1 00:23:30.435 --rc geninfo_all_blocks=1 00:23:30.435 --rc geninfo_unexecuted_blocks=1 00:23:30.435 00:23:30.435 ' 00:23:30.435 22:54:45 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:30.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.435 --rc genhtml_branch_coverage=1 00:23:30.435 --rc genhtml_function_coverage=1 00:23:30.435 --rc genhtml_legend=1 00:23:30.435 --rc geninfo_all_blocks=1 00:23:30.435 --rc geninfo_unexecuted_blocks=1 00:23:30.435 00:23:30.435 ' 00:23:30.435 22:54:45 nvmf_dif -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:30.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.435 --rc genhtml_branch_coverage=1 00:23:30.435 --rc genhtml_function_coverage=1 00:23:30.435 --rc genhtml_legend=1 00:23:30.435 --rc geninfo_all_blocks=1 00:23:30.435 --rc geninfo_unexecuted_blocks=1 00:23:30.435 00:23:30.435 ' 00:23:30.435 22:54:45 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:30.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:30.435 --rc genhtml_branch_coverage=1 00:23:30.435 --rc genhtml_function_coverage=1 00:23:30.435 --rc genhtml_legend=1 00:23:30.435 --rc geninfo_all_blocks=1 00:23:30.435 --rc geninfo_unexecuted_blocks=1 00:23:30.435 00:23:30.435 ' 00:23:30.435 22:54:45 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.435 22:54:45 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.435 22:54:45 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.435 22:54:45 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.435 22:54:45 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.435 22:54:45 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.435 22:54:45 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:30.435 22:54:45 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:30.435 22:54:45 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.436 22:54:45 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:30.436 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:30.436 22:54:45 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:30.436 22:54:45 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:30.436 22:54:45 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:30.436 22:54:45 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:30.436 22:54:45 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.436 22:54:45 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:30.436 22:54:45 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:30.436 Cannot find device 
"nvmf_init_br" 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:30.436 Cannot find device "nvmf_init_br2" 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:30.436 Cannot find device "nvmf_tgt_br" 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@164 -- # true 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:30.436 Cannot find device "nvmf_tgt_br2" 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@165 -- # true 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:30.436 Cannot find device "nvmf_init_br" 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@166 -- # true 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:30.436 Cannot find device "nvmf_init_br2" 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@167 -- # true 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:30.436 Cannot find device "nvmf_tgt_br" 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@168 -- # true 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:30.436 Cannot find device "nvmf_tgt_br2" 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@169 -- # true 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:30.436 Cannot find device "nvmf_br" 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@170 -- # true 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:30.436 Cannot find device "nvmf_init_if" 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@171 -- # true 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:30.436 Cannot find device "nvmf_init_if2" 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@172 -- # true 00:23:30.436 22:54:45 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:30.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@173 -- # true 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:30.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@174 -- # true 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:30.693 22:54:45 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:30.694 22:54:45 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:30.694 22:54:45 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:30.694 22:54:45 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:30.694 22:54:45 nvmf_dif -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:30.694 22:54:45 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:30.694 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:30.694 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:23:30.694 00:23:30.694 --- 10.0.0.3 ping statistics --- 00:23:30.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.694 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:23:30.694 22:54:45 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:30.694 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:23:30.694 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:23:30.694 00:23:30.694 --- 10.0.0.4 ping statistics --- 00:23:30.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.694 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:23:30.694 22:54:45 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:30.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:30.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:23:30.694 00:23:30.694 --- 10.0.0.1 ping statistics --- 00:23:30.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.694 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:23:30.694 22:54:45 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:30.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:23:30.694 00:23:30.694 --- 10.0.0.2 ping statistics --- 00:23:30.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.694 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:23:30.694 22:54:45 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.694 22:54:45 nvmf_dif -- nvmf/common.sh@457 -- # return 0 00:23:30.694 22:54:45 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:23:30.694 22:54:45 nvmf_dif -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:31.259 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:31.259 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:31.259 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:31.259 22:54:45 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.259 22:54:45 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:31.259 22:54:45 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:31.259 22:54:45 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.259 22:54:45 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:31.259 22:54:45 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:31.259 22:54:45 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:31.259 22:54:45 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:31.259 22:54:45 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:31.259 22:54:45 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:31.259 22:54:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:31.259 22:54:45 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=97152 00:23:31.259 22:54:45 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:31.259 22:54:45 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 97152 00:23:31.259 22:54:45 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 97152 ']' 00:23:31.259 22:54:45 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.259 22:54:45 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:31.259 22:54:45 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
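The four pings above verify the topology the preceding ip commands built: two veth pairs per side, bridged so the host namespace (10.0.0.1/.2) can reach the target namespace (10.0.0.3/.4). A minimal reconstruction with one interface per side (the commands mirror the traced ones; the full test creates four interfaces):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br   # bridge-side veth ends join nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ping -c 1 10.0.0.3   # host namespace -> target namespace, as in the log

The iptables ACCEPT rules are tagged with an SPDK_NVMF comment precisely so the iptr teardown seen earlier in this log (iptables-save | grep -v SPDK_NVMF | iptables-restore) can strip them without touching any other firewall state.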
00:23:31.259 22:54:45 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:31.259 22:54:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:31.259 [2024-12-07 22:54:45.934312] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:23:31.259 [2024-12-07 22:54:45.934593] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:31.517 [2024-12-07 22:54:46.073456] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.517 [2024-12-07 22:54:46.117768] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:31.517 [2024-12-07 22:54:46.117840] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:31.517 [2024-12-07 22:54:46.117856] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:31.517 [2024-12-07 22:54:46.117866] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:31.517 [2024-12-07 22:54:46.117896] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:31.517 [2024-12-07 22:54:46.117930] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.517 [2024-12-07 22:54:46.155262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:31.517 22:54:46 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:31.517 22:54:46 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:23:31.517 22:54:46 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:31.517 22:54:46 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:31.517 22:54:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:31.517 22:54:46 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:31.517 22:54:46 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:31.517 22:54:46 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:31.517 22:54:46 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.517 22:54:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:31.517 [2024-12-07 22:54:46.251992] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:31.517 22:54:46 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.517 22:54:46 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:31.517 22:54:46 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:31.517 22:54:46 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:31.517 22:54:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:31.517 ************************************ 00:23:31.517 START TEST fio_dif_1_default 00:23:31.517 ************************************ 00:23:31.517 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:23:31.517 22:54:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:31.517 22:54:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:31.517 22:54:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:31.517 22:54:46 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:31.517 22:54:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:31.517 22:54:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:31.517 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.517 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:31.517 bdev_null0 00:23:31.517 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.517 22:54:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:31.517 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.517 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:31.775 [2024-12-07 22:54:46.300141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:31.775 { 00:23:31.775 "params": { 00:23:31.775 "name": "Nvme$subsystem", 00:23:31.775 "trtype": "$TEST_TRANSPORT", 00:23:31.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:31.775 "adrfam": "ipv4", 00:23:31.775 "trsvcid": "$NVMF_PORT", 00:23:31.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:31.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:31.775 "hdgst": ${hdgst:-false}, 00:23:31.775 "ddgst": ${ddgst:-false} 00:23:31.775 }, 00:23:31.775 "method": "bdev_nvme_attach_controller" 00:23:31.775 } 00:23:31.775 EOF 00:23:31.775 )") 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:23:31.775 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 
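Condensed, the DIF target setup traced above reduces to five RPCs (taken verbatim from the rpc_cmd calls, which wrap scripts/rpc.py): a TCP transport with target-side DIF insert/strip, a 64 MB null bdev formatted with 512-byte blocks plus 16 bytes of metadata carrying Type-1 protection information, and a subsystem exposing it on 10.0.0.3:4420.

./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420

The JSON blob printed just below is the host half: gen_nvmf_target_json renders a bdev_nvme_attach_controller stanza that fio's spdk_bdev engine loads through --spdk_json_conf, so the fio job runs as an NVMe-oF initiator against this subsystem.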
00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:31.776 "params": { 00:23:31.776 "name": "Nvme0", 00:23:31.776 "trtype": "tcp", 00:23:31.776 "traddr": "10.0.0.3", 00:23:31.776 "adrfam": "ipv4", 00:23:31.776 "trsvcid": "4420", 00:23:31.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:31.776 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:31.776 "hdgst": false, 00:23:31.776 "ddgst": false 00:23:31.776 }, 00:23:31.776 "method": "bdev_nvme_attach_controller" 00:23:31.776 }' 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:31.776 22:54:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:32.034 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:32.034 fio-3.35 00:23:32.034 Starting 1 thread 00:23:44.237 00:23:44.237 filename0: (groupid=0, jobs=1): err= 0: pid=97210: Sat Dec 7 22:54:56 2024 00:23:44.237 read: IOPS=9875, BW=38.6MiB/s (40.4MB/s)(386MiB/10001msec) 00:23:44.237 slat (nsec): min=5824, max=54735, avg=7763.81, stdev=3554.08 00:23:44.237 clat (usec): min=318, max=4819, avg=381.62, stdev=48.13 00:23:44.237 lat (usec): min=324, max=4858, avg=389.38, stdev=48.92 00:23:44.237 clat percentiles (usec): 00:23:44.237 | 1.00th=[ 326], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 347], 00:23:44.237 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 383], 00:23:44.237 | 70.00th=[ 396], 80.00th=[ 408], 90.00th=[ 429], 95.00th=[ 453], 00:23:44.237 | 99.00th=[ 506], 99.50th=[ 523], 99.90th=[ 562], 99.95th=[ 578], 00:23:44.237 | 99.99th=[ 857] 00:23:44.237 bw ( KiB/s): min=37824, max=40512, per=99.98%, avg=39493.05, stdev=700.13, samples=19 00:23:44.237 iops : min= 9456, max=10128, avg=9873.26, stdev=175.03, samples=19 00:23:44.237 lat (usec) : 500=98.86%, 750=1.13%, 1000=0.01% 00:23:44.237 lat (msec) : 2=0.01%, 10=0.01% 00:23:44.237 cpu : usr=84.46%, sys=13.40%, ctx=29, majf=0, minf=4 00:23:44.237 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:44.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:44.237 issued rwts: total=98760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:44.237 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:44.237 00:23:44.237 Run status group 0 (all 
jobs): 00:23:44.237 READ: bw=38.6MiB/s (40.4MB/s), 38.6MiB/s-38.6MiB/s (40.4MB/s-40.4MB/s), io=386MiB (405MB), run=10001-10001msec 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:44.237 ************************************ 00:23:44.237 END TEST fio_dif_1_default 00:23:44.237 ************************************ 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.237 00:23:44.237 real 0m10.872s 00:23:44.237 user 0m9.011s 00:23:44.237 sys 0m1.565s 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:44.237 22:54:57 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:44.237 22:54:57 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:44.237 22:54:57 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:44.237 22:54:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:44.237 ************************************ 00:23:44.237 START TEST fio_dif_1_multi_subsystems 00:23:44.237 ************************************ 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:44.237 bdev_null0 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:44.237 [2024-12-07 22:54:57.222846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:44.237 bdev_null1 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:44.237 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:44.238 { 00:23:44.238 "params": { 00:23:44.238 "name": "Nvme$subsystem", 00:23:44.238 "trtype": "$TEST_TRANSPORT", 00:23:44.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.238 "adrfam": "ipv4", 00:23:44.238 "trsvcid": "$NVMF_PORT", 00:23:44.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.238 "hdgst": ${hdgst:-false}, 00:23:44.238 "ddgst": ${ddgst:-false} 00:23:44.238 }, 00:23:44.238 "method": "bdev_nvme_attach_controller" 00:23:44.238 } 00:23:44.238 EOF 00:23:44.238 )") 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@73 -- # cat 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:44.238 { 00:23:44.238 "params": { 00:23:44.238 "name": "Nvme$subsystem", 00:23:44.238 "trtype": "$TEST_TRANSPORT", 00:23:44.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.238 "adrfam": "ipv4", 00:23:44.238 "trsvcid": "$NVMF_PORT", 00:23:44.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.238 "hdgst": ${hdgst:-false}, 00:23:44.238 "ddgst": ${ddgst:-false} 00:23:44.238 }, 00:23:44.238 "method": "bdev_nvme_attach_controller" 00:23:44.238 } 00:23:44.238 EOF 00:23:44.238 )") 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:44.238 "params": { 00:23:44.238 "name": "Nvme0", 00:23:44.238 "trtype": "tcp", 00:23:44.238 "traddr": "10.0.0.3", 00:23:44.238 "adrfam": "ipv4", 00:23:44.238 "trsvcid": "4420", 00:23:44.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:44.238 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:44.238 "hdgst": false, 00:23:44.238 "ddgst": false 00:23:44.238 }, 00:23:44.238 "method": "bdev_nvme_attach_controller" 00:23:44.238 },{ 00:23:44.238 "params": { 00:23:44.238 "name": "Nvme1", 00:23:44.238 "trtype": "tcp", 00:23:44.238 "traddr": "10.0.0.3", 00:23:44.238 "adrfam": "ipv4", 00:23:44.238 "trsvcid": "4420", 00:23:44.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.238 "hdgst": false, 00:23:44.238 "ddgst": false 00:23:44.238 }, 00:23:44.238 "method": "bdev_nvme_attach_controller" 00:23:44.238 }' 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n 
'' ]] 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:44.238 22:54:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:44.238 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:44.238 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:44.238 fio-3.35 00:23:44.238 Starting 2 threads 00:23:54.216 00:23:54.216 filename0: (groupid=0, jobs=1): err= 0: pid=97365: Sat Dec 7 22:55:08 2024 00:23:54.216 read: IOPS=5363, BW=21.0MiB/s (22.0MB/s)(210MiB/10001msec) 00:23:54.216 slat (nsec): min=6295, max=61196, avg=12765.81, stdev=4680.14 00:23:54.216 clat (usec): min=603, max=1328, avg=710.91, stdev=49.63 00:23:54.216 lat (usec): min=611, max=1355, avg=723.67, stdev=50.15 00:23:54.216 clat percentiles (usec): 00:23:54.216 | 1.00th=[ 627], 5.00th=[ 644], 10.00th=[ 660], 20.00th=[ 668], 00:23:54.216 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[ 701], 60.00th=[ 717], 00:23:54.216 | 70.00th=[ 725], 80.00th=[ 750], 90.00th=[ 775], 95.00th=[ 807], 00:23:54.216 | 99.00th=[ 865], 99.50th=[ 889], 99.90th=[ 947], 99.95th=[ 979], 00:23:54.216 | 99.99th=[ 1106] 00:23:54.216 bw ( KiB/s): min=21024, max=22016, per=49.98%, avg=21448.95, stdev=270.83, samples=19 00:23:54.216 iops : min= 5256, max= 5504, avg=5362.21, stdev=67.74, samples=19 00:23:54.216 lat (usec) : 750=81.43%, 1000=18.55% 00:23:54.216 lat (msec) : 2=0.02% 00:23:54.216 cpu : usr=90.15%, sys=8.52%, ctx=17, majf=0, minf=9 00:23:54.216 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:54.216 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.216 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.216 issued rwts: total=53644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:54.216 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:54.216 filename1: (groupid=0, jobs=1): err= 0: pid=97366: Sat Dec 7 22:55:08 2024 00:23:54.216 read: IOPS=5363, BW=21.0MiB/s (22.0MB/s)(210MiB/10001msec) 00:23:54.216 slat (nsec): min=6290, max=69689, avg=12938.35, stdev=4821.25 00:23:54.216 clat (usec): min=561, max=1300, avg=710.38, stdev=59.29 00:23:54.216 lat (usec): min=571, max=1324, avg=723.31, stdev=60.83 00:23:54.216 clat percentiles (usec): 00:23:54.216 | 1.00th=[ 594], 5.00th=[ 619], 10.00th=[ 635], 20.00th=[ 660], 00:23:54.216 | 30.00th=[ 685], 40.00th=[ 693], 50.00th=[ 709], 60.00th=[ 717], 00:23:54.216 | 70.00th=[ 734], 80.00th=[ 750], 90.00th=[ 783], 95.00th=[ 816], 00:23:54.216 | 99.00th=[ 881], 99.50th=[ 906], 99.90th=[ 971], 99.95th=[ 996], 00:23:54.216 | 99.99th=[ 1156] 00:23:54.217 bw ( KiB/s): min=21024, max=22016, per=49.98%, avg=21448.95, stdev=270.83, samples=19 00:23:54.217 iops : min= 5256, max= 5504, avg=5362.21, stdev=67.74, samples=19 00:23:54.217 lat (usec) : 750=78.49%, 1000=21.47% 00:23:54.217 lat (msec) : 2=0.04% 00:23:54.217 cpu : usr=90.60%, sys=8.01%, ctx=12, majf=0, minf=9 00:23:54.217 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:54.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:54.217 issued rwts: total=53644,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:23:54.217 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:54.217 00:23:54.217 Run status group 0 (all jobs): 00:23:54.217 READ: bw=41.9MiB/s (43.9MB/s), 21.0MiB/s-21.0MiB/s (22.0MB/s-22.0MB/s), io=419MiB (439MB), run=10001-10001msec 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:54.217 ************************************ 00:23:54.217 END TEST fio_dif_1_multi_subsystems 00:23:54.217 ************************************ 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.217 00:23:54.217 real 0m10.983s 00:23:54.217 user 0m18.703s 00:23:54.217 sys 0m1.885s 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:54.217 22:55:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:54.217 22:55:08 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:54.217 22:55:08 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:54.217 22:55:08 nvmf_dif 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:54.217 22:55:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:54.217 ************************************ 00:23:54.217 START TEST fio_dif_rand_params 00:23:54.217 ************************************ 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:54.217 bdev_null0 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:54.217 [2024-12-07 22:55:08.259655] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 
-- # fio /dev/fd/62 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:54.217 { 00:23:54.217 "params": { 00:23:54.217 "name": "Nvme$subsystem", 00:23:54.217 "trtype": "$TEST_TRANSPORT", 00:23:54.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:54.217 "adrfam": "ipv4", 00:23:54.217 "trsvcid": "$NVMF_PORT", 00:23:54.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:54.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:54.217 "hdgst": ${hdgst:-false}, 00:23:54.217 "ddgst": ${ddgst:-false} 00:23:54.217 }, 00:23:54.217 "method": "bdev_nvme_attach_controller" 00:23:54.217 } 00:23:54.217 EOF 00:23:54.217 )") 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
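The heredoc traced above is the core of how gen_nvmf_target_json builds the --spdk_json_conf payload handed to fio on /dev/fd/62: one shell-expanded JSON fragment per subsystem index, accumulated into the config array, comma-joined (the IFS=, and printf steps traced below), and pretty-printed with jq. A minimal standalone sketch of that pattern follows, assuming the usual defaults for the environment variables and an assumed outer "subsystems"/"bdev" wrapper; the trace itself only shows the fragments and the join, so treat the wrapper and defaults as illustrative, not as the harness's exact code.

gen_target_json_sketch() {
    # One bdev_nvme_attach_controller fragment per subsystem index,
    # shell-expanded from a heredoc as in the trace above.
    local subsystem
    local -a config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.3}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments (first character of IFS) and pretty-print.
    # The enclosing "subsystems"/"bdev" wrapper is an assumption; the
    # trace only shows the joined fragment list.
    local IFS=,
    jq . <<<"{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"
}
gen_target_json_sketch 0   # this pass attaches a single controller, Nvme0

The printf '%s\n' step traced just below shows exactly this join for the Nvme0 fragment.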
00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:54.217 "params": { 00:23:54.217 "name": "Nvme0", 00:23:54.217 "trtype": "tcp", 00:23:54.217 "traddr": "10.0.0.3", 00:23:54.217 "adrfam": "ipv4", 00:23:54.217 "trsvcid": "4420", 00:23:54.217 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:54.217 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:54.217 "hdgst": false, 00:23:54.217 "ddgst": false 00:23:54.217 }, 00:23:54.217 "method": "bdev_nvme_attach_controller" 00:23:54.217 }' 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:54.217 22:55:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:54.218 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:54.218 ... 
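Between the JSON config above and the fio banner sits the job file itself, piped in as /dev/fd/61 by gen_fio_conf. A hedged reconstruction of its shape for this pass, from the parameters set at the top of the test (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5) and the filename0 banner just above; the exact option set gen_fio_conf emits in target/dif.sh may differ, and filename=Nvme0n1 assumes the default namespace-1 bdev name for the attached Nvme0 controller:

# Hypothetical job-file shape for this pass; all values except rw, bs,
# iodepth, numjobs, and runtime (which appear in the trace) are assumed.
cat <<EOF
[global]
ioengine=spdk_bdev
thread=1
direct=1
time_based=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=5

[filename0]
filename=Nvme0n1
EOF

numjobs=3 over a single filename section is what yields the "Starting 3 threads" line below, and the per-thread results further down cross-check against the block size: 285 IOPS x 128 KiB = 36,480 KiB/s, matching the average bandwidth fio reports for each job.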
00:23:54.218 fio-3.35 00:23:54.218 Starting 3 threads 00:23:59.493 00:23:59.493 filename0: (groupid=0, jobs=1): err= 0: pid=97522: Sat Dec 7 22:55:13 2024 00:23:59.493 read: IOPS=284, BW=35.6MiB/s (37.3MB/s)(178MiB/5002msec) 00:23:59.493 slat (nsec): min=6467, max=48918, avg=9600.86, stdev=4283.60 00:23:59.493 clat (usec): min=9998, max=13982, avg=10505.26, stdev=364.75 00:23:59.493 lat (usec): min=10005, max=14008, avg=10514.86, stdev=365.01 00:23:59.493 clat percentiles (usec): 00:23:59.493 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10290], 20.00th=[10290], 00:23:59.493 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10421], 60.00th=[10421], 00:23:59.493 | 70.00th=[10552], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:23:59.493 | 99.00th=[11731], 99.50th=[11994], 99.90th=[13960], 99.95th=[13960], 00:23:59.493 | 99.99th=[13960] 00:23:59.493 bw ( KiB/s): min=36096, max=36864, per=33.36%, avg=36522.67, stdev=404.77, samples=9 00:23:59.493 iops : min= 282, max= 288, avg=285.33, stdev= 3.16, samples=9 00:23:59.493 lat (msec) : 10=0.07%, 20=99.93% 00:23:59.493 cpu : usr=90.12%, sys=9.32%, ctx=13, majf=0, minf=9 00:23:59.493 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:59.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.493 issued rwts: total=1425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.493 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:59.493 filename0: (groupid=0, jobs=1): err= 0: pid=97523: Sat Dec 7 22:55:13 2024 00:23:59.493 read: IOPS=285, BW=35.7MiB/s (37.4MB/s)(179MiB/5005msec) 00:23:59.493 slat (nsec): min=6895, max=45736, avg=13652.38, stdev=3961.08 00:23:59.493 clat (usec): min=7236, max=12181, avg=10484.96, stdev=373.57 00:23:59.493 lat (usec): min=7266, max=12207, avg=10498.62, stdev=373.92 00:23:59.493 clat percentiles (usec): 00:23:59.493 | 1.00th=[10159], 5.00th=[10159], 10.00th=[10159], 20.00th=[10290], 00:23:59.493 | 30.00th=[10290], 40.00th=[10290], 50.00th=[10421], 60.00th=[10421], 00:23:59.493 | 70.00th=[10552], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:23:59.493 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12125], 99.95th=[12125], 00:23:59.493 | 99.99th=[12125] 00:23:59.493 bw ( KiB/s): min=36096, max=37632, per=33.32%, avg=36480.00, stdev=543.06, samples=10 00:23:59.493 iops : min= 282, max= 294, avg=285.00, stdev= 4.24, samples=10 00:23:59.493 lat (msec) : 10=0.42%, 20=99.58% 00:23:59.493 cpu : usr=90.69%, sys=8.81%, ctx=7, majf=0, minf=9 00:23:59.493 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:59.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.493 issued rwts: total=1428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.493 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:59.493 filename0: (groupid=0, jobs=1): err= 0: pid=97524: Sat Dec 7 22:55:13 2024 00:23:59.493 read: IOPS=285, BW=35.7MiB/s (37.4MB/s)(179MiB/5005msec) 00:23:59.493 slat (nsec): min=6720, max=45896, avg=14193.21, stdev=4029.14 00:23:59.493 clat (usec): min=7207, max=12141, avg=10483.18, stdev=373.20 00:23:59.493 lat (usec): min=7234, max=12168, avg=10497.37, stdev=373.41 00:23:59.493 clat percentiles (usec): 00:23:59.493 | 1.00th=[10028], 5.00th=[10159], 10.00th=[10159], 20.00th=[10290], 00:23:59.493 | 30.00th=[10290], 40.00th=[10290], 
50.00th=[10421], 60.00th=[10421], 00:23:59.493 | 70.00th=[10552], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:23:59.493 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12125], 99.95th=[12125], 00:23:59.493 | 99.99th=[12125] 00:23:59.493 bw ( KiB/s): min=36096, max=37632, per=33.32%, avg=36480.00, stdev=543.06, samples=10 00:23:59.493 iops : min= 282, max= 294, avg=285.00, stdev= 4.24, samples=10 00:23:59.493 lat (msec) : 10=0.42%, 20=99.58% 00:23:59.493 cpu : usr=91.71%, sys=7.79%, ctx=6, majf=0, minf=9 00:23:59.493 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:59.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:59.493 issued rwts: total=1428,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:59.493 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:59.493 00:23:59.493 Run status group 0 (all jobs): 00:23:59.493 READ: bw=107MiB/s (112MB/s), 35.6MiB/s-35.7MiB/s (37.3MB/s-37.4MB/s), io=535MiB (561MB), run=5002-5005msec 00:23:59.493 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:59.493 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:59.493 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:59.493 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:59.493 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:59.493 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:59.493 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.493 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.493 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.493 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:59.494 22:55:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 bdev_null0 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 [2024-12-07 22:55:14.134874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 bdev_null1 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 bdev_null2 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # gen_fio_conf 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:59.494 { 00:23:59.494 "params": { 00:23:59.494 "name": "Nvme$subsystem", 00:23:59.494 "trtype": "$TEST_TRANSPORT", 00:23:59.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.494 "adrfam": "ipv4", 00:23:59.494 "trsvcid": "$NVMF_PORT", 00:23:59.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.494 "hdgst": ${hdgst:-false}, 00:23:59.494 "ddgst": ${ddgst:-false} 00:23:59.494 }, 00:23:59.494 "method": "bdev_nvme_attach_controller" 00:23:59.494 } 00:23:59.494 EOF 00:23:59.494 )") 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:59.494 { 00:23:59.494 "params": { 00:23:59.494 "name": "Nvme$subsystem", 00:23:59.494 "trtype": "$TEST_TRANSPORT", 00:23:59.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.494 "adrfam": "ipv4", 00:23:59.494 "trsvcid": "$NVMF_PORT", 00:23:59.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.494 "hdgst": ${hdgst:-false}, 00:23:59.494 "ddgst": ${ddgst:-false} 00:23:59.494 }, 00:23:59.494 "method": "bdev_nvme_attach_controller" 00:23:59.494 } 00:23:59.494 EOF 00:23:59.494 )") 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:59.494 22:55:14 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:59.494 22:55:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:59.494 { 00:23:59.495 "params": { 00:23:59.495 "name": "Nvme$subsystem", 00:23:59.495 "trtype": "$TEST_TRANSPORT", 00:23:59.495 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.495 "adrfam": "ipv4", 00:23:59.495 "trsvcid": "$NVMF_PORT", 00:23:59.495 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.495 "hdgst": ${hdgst:-false}, 00:23:59.495 "ddgst": ${ddgst:-false} 00:23:59.495 }, 00:23:59.495 "method": "bdev_nvme_attach_controller" 00:23:59.495 } 00:23:59.495 EOF 00:23:59.495 )") 00:23:59.495 22:55:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:23:59.495 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:59.495 22:55:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:59.495 22:55:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:23:59.495 22:55:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:23:59.495 22:55:14 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:59.495 "params": { 00:23:59.495 "name": "Nvme0", 00:23:59.495 "trtype": "tcp", 00:23:59.495 "traddr": "10.0.0.3", 00:23:59.495 "adrfam": "ipv4", 00:23:59.495 "trsvcid": "4420", 00:23:59.495 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:59.495 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:59.495 "hdgst": false, 00:23:59.495 "ddgst": false 00:23:59.495 }, 00:23:59.495 "method": "bdev_nvme_attach_controller" 00:23:59.495 },{ 00:23:59.495 "params": { 00:23:59.495 "name": "Nvme1", 00:23:59.495 "trtype": "tcp", 00:23:59.495 "traddr": "10.0.0.3", 00:23:59.495 "adrfam": "ipv4", 00:23:59.495 "trsvcid": "4420", 00:23:59.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.495 "hdgst": false, 00:23:59.495 "ddgst": false 00:23:59.495 }, 00:23:59.495 "method": "bdev_nvme_attach_controller" 00:23:59.495 },{ 00:23:59.495 "params": { 00:23:59.495 "name": "Nvme2", 00:23:59.495 "trtype": "tcp", 00:23:59.495 "traddr": "10.0.0.3", 00:23:59.495 "adrfam": "ipv4", 00:23:59.495 "trsvcid": "4420", 00:23:59.495 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:59.495 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:59.495 "hdgst": false, 00:23:59.495 "ddgst": false 00:23:59.495 }, 00:23:59.495 "method": "bdev_nvme_attach_controller" 00:23:59.495 }' 00:23:59.495 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:59.495 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:59.495 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:59.495 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:59.495 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:59.495 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:59.754 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:59.754 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:59.754 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:59.754 22:55:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:59.754 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:59.754 ... 00:23:59.754 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:59.754 ... 00:23:59.754 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:59.754 ... 00:23:59.754 fio-3.35 00:23:59.754 Starting 24 threads 00:24:11.948 00:24:11.948 filename0: (groupid=0, jobs=1): err= 0: pid=97618: Sat Dec 7 22:55:25 2024 00:24:11.948 read: IOPS=185, BW=741KiB/s (759kB/s)(7448KiB/10053msec) 00:24:11.948 slat (usec): min=3, max=8025, avg=33.08, stdev=307.72 00:24:11.948 clat (msec): min=2, max=152, avg=86.06, stdev=28.60 00:24:11.948 lat (msec): min=2, max=152, avg=86.09, stdev=28.61 00:24:11.948 clat percentiles (msec): 00:24:11.948 | 1.00th=[ 4], 5.00th=[ 40], 10.00th=[ 52], 20.00th=[ 64], 00:24:11.948 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 87], 60.00th=[ 99], 00:24:11.948 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 116], 95.00th=[ 121], 00:24:11.948 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 153], 00:24:11.948 | 99.99th=[ 153] 00:24:11.948 bw ( KiB/s): min= 512, max= 1536, per=4.13%, avg=738.10, stdev=231.27, samples=20 00:24:11.948 iops : min= 128, max= 384, avg=184.50, stdev=57.83, samples=20 00:24:11.948 lat (msec) : 4=1.72%, 10=0.86%, 20=1.13%, 50=5.42%, 100=51.66% 00:24:11.948 lat (msec) : 250=39.21% 00:24:11.948 cpu : usr=45.31%, sys=2.61%, ctx=1177, majf=0, minf=9 00:24:11.948 IO depths : 1=0.1%, 2=2.8%, 4=11.1%, 8=71.4%, 16=14.5%, 32=0.0%, >=64=0.0% 00:24:11.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.948 complete : 0=0.0%, 4=90.2%, 8=7.3%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.948 issued rwts: total=1862,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.948 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.948 filename0: (groupid=0, jobs=1): err= 0: pid=97619: Sat Dec 7 22:55:25 2024 00:24:11.948 read: IOPS=200, BW=803KiB/s (822kB/s)(8072KiB/10053msec) 00:24:11.948 slat (usec): min=6, max=9026, avg=22.36, stdev=282.79 00:24:11.948 clat (usec): min=1460, max=155982, avg=79446.39, stdev=34353.06 00:24:11.948 lat (usec): min=1467, max=155991, avg=79468.75, stdev=34355.11 00:24:11.948 clat percentiles (usec): 00:24:11.948 | 1.00th=[ 1532], 5.00th=[ 1680], 10.00th=[ 8586], 20.00th=[ 60031], 00:24:11.948 | 30.00th=[ 71828], 40.00th=[ 73925], 50.00th=[ 83362], 60.00th=[ 95945], 00:24:11.948 | 70.00th=[106431], 80.00th=[107480], 90.00th=[117965], 95.00th=[120062], 00:24:11.948 | 99.00th=[123208], 99.50th=[131597], 99.90th=[156238], 99.95th=[156238], 00:24:11.948 | 99.99th=[156238] 00:24:11.948 bw ( KiB/s): min= 584, max= 2777, per=4.47%, avg=799.55, stdev=474.34, samples=20 00:24:11.948 iops : min= 146, max= 694, avg=199.85, stdev=118.54, samples=20 00:24:11.948 lat (msec) : 2=6.24%, 4=3.02%, 10=1.83%, 50=4.06%, 100=50.59% 00:24:11.948 lat (msec) : 250=34.24% 00:24:11.948 cpu : usr=33.18%, sys=1.65%, ctx=1174, majf=0, minf=0 00:24:11.948 IO depths : 1=0.3%, 2=2.0%, 4=7.1%, 8=75.1%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:11.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.948 complete 
: 0=0.0%, 4=89.5%, 8=9.0%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.948 issued rwts: total=2018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.948 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.948 filename0: (groupid=0, jobs=1): err= 0: pid=97620: Sat Dec 7 22:55:25 2024 00:24:11.948 read: IOPS=188, BW=753KiB/s (771kB/s)(7560KiB/10041msec) 00:24:11.948 slat (usec): min=7, max=8024, avg=27.09, stdev=299.32 00:24:11.948 clat (msec): min=12, max=157, avg=84.82, stdev=23.99 00:24:11.948 lat (msec): min=12, max=157, avg=84.85, stdev=23.97 00:24:11.948 clat percentiles (msec): 00:24:11.948 | 1.00th=[ 32], 5.00th=[ 46], 10.00th=[ 52], 20.00th=[ 64], 00:24:11.948 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 96], 00:24:11.948 | 70.00th=[ 105], 80.00th=[ 110], 90.00th=[ 115], 95.00th=[ 120], 00:24:11.948 | 99.00th=[ 126], 99.50th=[ 133], 99.90th=[ 153], 99.95th=[ 159], 00:24:11.948 | 99.99th=[ 159] 00:24:11.948 bw ( KiB/s): min= 608, max= 1187, per=4.19%, avg=749.25, stdev=142.61, samples=20 00:24:11.948 iops : min= 152, max= 296, avg=187.25, stdev=35.50, samples=20 00:24:11.948 lat (msec) : 20=0.74%, 50=8.10%, 100=55.71%, 250=35.45% 00:24:11.948 cpu : usr=39.86%, sys=2.27%, ctx=1164, majf=0, minf=9 00:24:11.948 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:24:11.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.948 complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.948 issued rwts: total=1890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.948 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.948 filename0: (groupid=0, jobs=1): err= 0: pid=97621: Sat Dec 7 22:55:25 2024 00:24:11.948 read: IOPS=187, BW=752KiB/s (770kB/s)(7552KiB/10043msec) 00:24:11.948 slat (usec): min=8, max=8044, avg=30.40, stdev=272.25 00:24:11.948 clat (msec): min=9, max=143, avg=84.88, stdev=23.17 00:24:11.948 lat (msec): min=9, max=143, avg=84.91, stdev=23.16 00:24:11.948 clat percentiles (msec): 00:24:11.948 | 1.00th=[ 33], 5.00th=[ 49], 10.00th=[ 58], 20.00th=[ 65], 00:24:11.949 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 82], 60.00th=[ 94], 00:24:11.949 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 115], 95.00th=[ 118], 00:24:11.949 | 99.00th=[ 125], 99.50th=[ 136], 99.90th=[ 142], 99.95th=[ 144], 00:24:11.949 | 99.99th=[ 144] 00:24:11.949 bw ( KiB/s): min= 616, max= 1035, per=4.19%, avg=748.45, stdev=126.47, samples=20 00:24:11.949 iops : min= 154, max= 258, avg=187.05, stdev=31.50, samples=20 00:24:11.949 lat (msec) : 10=0.11%, 20=0.74%, 50=5.08%, 100=59.64%, 250=34.43% 00:24:11.949 cpu : usr=40.22%, sys=2.10%, ctx=1335, majf=0, minf=9 00:24:11.949 IO depths : 1=0.1%, 2=1.4%, 4=5.5%, 8=77.7%, 16=15.3%, 32=0.0%, >=64=0.0% 00:24:11.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.949 complete : 0=0.0%, 4=88.6%, 8=10.2%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.949 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.949 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.949 filename0: (groupid=0, jobs=1): err= 0: pid=97622: Sat Dec 7 22:55:25 2024 00:24:11.949 read: IOPS=182, BW=729KiB/s (746kB/s)(7308KiB/10028msec) 00:24:11.949 slat (usec): min=8, max=8028, avg=19.59, stdev=187.55 00:24:11.949 clat (msec): min=35, max=145, avg=87.67, stdev=21.34 00:24:11.949 lat (msec): min=35, max=145, avg=87.69, stdev=21.34 00:24:11.949 clat percentiles (msec): 00:24:11.949 | 1.00th=[ 46], 5.00th=[ 57], 10.00th=[ 62], 
20.00th=[ 72], 00:24:11.949 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 96], 00:24:11.949 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 118], 95.00th=[ 121], 00:24:11.949 | 99.00th=[ 134], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 146], 00:24:11.949 | 99.99th=[ 146] 00:24:11.949 bw ( KiB/s): min= 592, max= 864, per=4.05%, avg=724.30, stdev=78.97, samples=20 00:24:11.949 iops : min= 148, max= 216, avg=181.05, stdev=19.72, samples=20 00:24:11.949 lat (msec) : 50=3.94%, 100=60.92%, 250=35.14% 00:24:11.949 cpu : usr=31.38%, sys=1.80%, ctx=861, majf=0, minf=9 00:24:11.949 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=77.8%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:11.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.949 complete : 0=0.0%, 4=88.7%, 8=10.2%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.949 issued rwts: total=1827,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.949 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.949 filename0: (groupid=0, jobs=1): err= 0: pid=97623: Sat Dec 7 22:55:25 2024 00:24:11.949 read: IOPS=190, BW=760KiB/s (778kB/s)(7612KiB/10013msec) 00:24:11.949 slat (usec): min=4, max=4026, avg=26.56, stdev=205.37 00:24:11.949 clat (msec): min=12, max=141, avg=84.07, stdev=22.26 00:24:11.949 lat (msec): min=12, max=141, avg=84.10, stdev=22.27 00:24:11.949 clat percentiles (msec): 00:24:11.949 | 1.00th=[ 42], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 67], 00:24:11.949 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 86], 00:24:11.949 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 115], 95.00th=[ 118], 00:24:11.949 | 99.00th=[ 123], 99.50th=[ 129], 99.90th=[ 142], 99.95th=[ 142], 00:24:11.949 | 99.99th=[ 142] 00:24:11.949 bw ( KiB/s): min= 664, max= 968, per=4.17%, avg=745.21, stdev=88.75, samples=19 00:24:11.949 iops : min= 166, max= 242, avg=186.21, stdev=22.19, samples=19 00:24:11.949 lat (msec) : 20=0.53%, 50=4.78%, 100=62.69%, 250=32.00% 00:24:11.949 cpu : usr=43.41%, sys=2.48%, ctx=1135, majf=0, minf=9 00:24:11.949 IO depths : 1=0.1%, 2=1.2%, 4=4.6%, 8=79.1%, 16=15.1%, 32=0.0%, >=64=0.0% 00:24:11.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.949 complete : 0=0.0%, 4=88.0%, 8=11.0%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.949 issued rwts: total=1903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.949 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.949 filename0: (groupid=0, jobs=1): err= 0: pid=97624: Sat Dec 7 22:55:25 2024 00:24:11.949 read: IOPS=186, BW=745KiB/s (763kB/s)(7464KiB/10020msec) 00:24:11.949 slat (usec): min=3, max=8024, avg=19.72, stdev=185.48 00:24:11.949 clat (msec): min=21, max=144, avg=85.77, stdev=21.89 00:24:11.949 lat (msec): min=21, max=144, avg=85.79, stdev=21.89 00:24:11.949 clat percentiles (msec): 00:24:11.949 | 1.00th=[ 39], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 69], 00:24:11.949 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 94], 00:24:11.949 | 70.00th=[ 108], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 121], 00:24:11.949 | 99.00th=[ 123], 99.50th=[ 133], 99.90th=[ 136], 99.95th=[ 144], 00:24:11.949 | 99.99th=[ 144] 00:24:11.949 bw ( KiB/s): min= 616, max= 952, per=4.14%, avg=740.00, stdev=98.58, samples=20 00:24:11.949 iops : min= 154, max= 238, avg=185.00, stdev=24.64, samples=20 00:24:11.949 lat (msec) : 50=5.14%, 100=62.33%, 250=32.53% 00:24:11.949 cpu : usr=31.62%, sys=1.89%, ctx=1114, majf=0, minf=9 00:24:11.949 IO depths : 1=0.1%, 2=1.4%, 4=5.5%, 8=77.8%, 16=15.2%, 32=0.0%, >=64=0.0% 00:24:11.949 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.949 complete : 0=0.0%, 4=88.5%, 8=10.3%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.949 issued rwts: total=1866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.949 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.949 filename0: (groupid=0, jobs=1): err= 0: pid=97625: Sat Dec 7 22:55:25 2024 00:24:11.949 read: IOPS=193, BW=775KiB/s (794kB/s)(7760KiB/10011msec) 00:24:11.949 slat (usec): min=3, max=8032, avg=26.92, stdev=250.70 00:24:11.949 clat (msec): min=21, max=130, avg=82.42, stdev=22.23 00:24:11.949 lat (msec): min=21, max=130, avg=82.44, stdev=22.22 00:24:11.949 clat percentiles (msec): 00:24:11.949 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 65], 00:24:11.949 | 30.00th=[ 71], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 85], 00:24:11.949 | 70.00th=[ 100], 80.00th=[ 108], 90.00th=[ 113], 95.00th=[ 117], 00:24:11.949 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 125], 99.95th=[ 131], 00:24:11.949 | 99.99th=[ 131] 00:24:11.949 bw ( KiB/s): min= 664, max= 1017, per=4.26%, avg=762.37, stdev=103.32, samples=19 00:24:11.949 iops : min= 166, max= 254, avg=190.53, stdev=25.79, samples=19 00:24:11.949 lat (msec) : 50=7.58%, 100=63.30%, 250=29.12% 00:24:11.949 cpu : usr=40.55%, sys=2.38%, ctx=1443, majf=0, minf=9 00:24:11.949 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.2%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:11.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.949 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.949 issued rwts: total=1940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.949 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.949 filename1: (groupid=0, jobs=1): err= 0: pid=97626: Sat Dec 7 22:55:25 2024 00:24:11.949 read: IOPS=181, BW=725KiB/s (743kB/s)(7280KiB/10040msec) 00:24:11.949 slat (usec): min=4, max=12037, avg=21.13, stdev=281.87 00:24:11.949 clat (msec): min=38, max=156, avg=88.08, stdev=21.33 00:24:11.949 lat (msec): min=38, max=156, avg=88.10, stdev=21.33 00:24:11.949 clat percentiles (msec): 00:24:11.949 | 1.00th=[ 46], 5.00th=[ 59], 10.00th=[ 62], 20.00th=[ 71], 00:24:11.949 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 84], 60.00th=[ 97], 00:24:11.949 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 117], 95.00th=[ 121], 00:24:11.949 | 99.00th=[ 122], 99.50th=[ 136], 99.90th=[ 155], 99.95th=[ 157], 00:24:11.949 | 99.99th=[ 157] 00:24:11.949 bw ( KiB/s): min= 616, max= 896, per=4.03%, avg=721.50, stdev=95.50, samples=20 00:24:11.949 iops : min= 154, max= 224, avg=180.35, stdev=23.85, samples=20 00:24:11.949 lat (msec) : 50=3.30%, 100=59.34%, 250=37.36% 00:24:11.949 cpu : usr=34.44%, sys=1.98%, ctx=991, majf=0, minf=10 00:24:11.949 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:11.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.949 complete : 0=0.0%, 4=88.5%, 8=10.7%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.949 issued rwts: total=1820,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.949 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.949 filename1: (groupid=0, jobs=1): err= 0: pid=97627: Sat Dec 7 22:55:25 2024 00:24:11.949 read: IOPS=182, BW=729KiB/s (747kB/s)(7324KiB/10041msec) 00:24:11.949 slat (usec): min=6, max=10042, avg=31.21, stdev=399.86 00:24:11.949 clat (msec): min=12, max=146, avg=87.52, stdev=23.89 00:24:11.949 lat (msec): min=12, max=146, avg=87.55, stdev=23.89 00:24:11.949 clat percentiles (msec): 
00:24:11.949 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 68], 00:24:11.949 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 85], 60.00th=[ 100], 00:24:11.949 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 118], 95.00th=[ 121], 00:24:11.949 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:24:11.949 | 99.99th=[ 146] 00:24:11.949 bw ( KiB/s): min= 584, max= 1019, per=4.06%, avg=725.65, stdev=120.04, samples=20 00:24:11.949 iops : min= 146, max= 254, avg=181.35, stdev=29.89, samples=20 00:24:11.949 lat (msec) : 20=0.76%, 50=6.66%, 100=53.19%, 250=39.38% 00:24:11.949 cpu : usr=31.61%, sys=1.91%, ctx=1106, majf=0, minf=9 00:24:11.949 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=79.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:24:11.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.949 complete : 0=0.0%, 4=88.4%, 8=10.8%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.949 issued rwts: total=1831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.949 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.949 filename1: (groupid=0, jobs=1): err= 0: pid=97628: Sat Dec 7 22:55:25 2024 00:24:11.949 read: IOPS=188, BW=755KiB/s (773kB/s)(7564KiB/10024msec) 00:24:11.949 slat (usec): min=3, max=2655, avg=16.61, stdev=60.94 00:24:11.949 clat (msec): min=32, max=144, avg=84.68, stdev=21.62 00:24:11.949 lat (msec): min=32, max=144, avg=84.69, stdev=21.62 00:24:11.949 clat percentiles (msec): 00:24:11.949 | 1.00th=[ 40], 5.00th=[ 51], 10.00th=[ 60], 20.00th=[ 68], 00:24:11.949 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 85], 00:24:11.949 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 114], 95.00th=[ 118], 00:24:11.949 | 99.00th=[ 125], 99.50th=[ 125], 99.90th=[ 146], 99.95th=[ 146], 00:24:11.949 | 99.99th=[ 146] 00:24:11.949 bw ( KiB/s): min= 640, max= 934, per=4.19%, avg=749.90, stdev=88.37, samples=20 00:24:11.949 iops : min= 160, max= 233, avg=187.45, stdev=22.04, samples=20 00:24:11.949 lat (msec) : 50=5.13%, 100=61.98%, 250=32.89% 00:24:11.949 cpu : usr=43.34%, sys=2.33%, ctx=1370, majf=0, minf=9 00:24:11.949 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:11.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.949 complete : 0=0.0%, 4=87.6%, 8=11.7%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.950 issued rwts: total=1891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.950 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.950 filename1: (groupid=0, jobs=1): err= 0: pid=97629: Sat Dec 7 22:55:25 2024 00:24:11.950 read: IOPS=185, BW=740KiB/s (758kB/s)(7424KiB/10030msec) 00:24:11.950 slat (usec): min=4, max=4024, avg=16.34, stdev=93.22 00:24:11.950 clat (msec): min=30, max=155, avg=86.33, stdev=24.63 00:24:11.950 lat (msec): min=30, max=155, avg=86.35, stdev=24.62 00:24:11.950 clat percentiles (msec): 00:24:11.950 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 63], 00:24:11.950 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 86], 60.00th=[ 100], 00:24:11.950 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 118], 95.00th=[ 121], 00:24:11.950 | 99.00th=[ 126], 99.50th=[ 132], 99.90th=[ 146], 99.95th=[ 155], 00:24:11.950 | 99.99th=[ 155] 00:24:11.950 bw ( KiB/s): min= 576, max= 1040, per=4.11%, avg=735.85, stdev=144.78, samples=20 00:24:11.950 iops : min= 144, max= 260, avg=183.95, stdev=36.17, samples=20 00:24:11.950 lat (msec) : 50=8.84%, 100=51.99%, 250=39.17% 00:24:11.950 cpu : usr=34.48%, sys=2.06%, ctx=1289, majf=0, minf=9 00:24:11.950 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 
8=82.5%, 16=16.7%, 32=0.0%, >=64=0.0% 00:24:11.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.950 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.950 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.950 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.950 filename1: (groupid=0, jobs=1): err= 0: pid=97630: Sat Dec 7 22:55:25 2024 00:24:11.950 read: IOPS=166, BW=667KiB/s (683kB/s)(6684KiB/10028msec) 00:24:11.950 slat (usec): min=4, max=6035, avg=20.50, stdev=177.00 00:24:11.950 clat (msec): min=35, max=165, avg=95.82, stdev=26.31 00:24:11.950 lat (msec): min=36, max=165, avg=95.84, stdev=26.31 00:24:11.950 clat percentiles (msec): 00:24:11.950 | 1.00th=[ 48], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 72], 00:24:11.950 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 99], 60.00th=[ 108], 00:24:11.950 | 70.00th=[ 110], 80.00th=[ 120], 90.00th=[ 128], 95.00th=[ 144], 00:24:11.950 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 167], 99.95th=[ 167], 00:24:11.950 | 99.99th=[ 167] 00:24:11.950 bw ( KiB/s): min= 496, max= 968, per=3.72%, avg=664.35, stdev=148.24, samples=20 00:24:11.950 iops : min= 124, max= 242, avg=166.05, stdev=37.01, samples=20 00:24:11.950 lat (msec) : 50=3.77%, 100=47.34%, 250=48.89% 00:24:11.950 cpu : usr=35.31%, sys=1.96%, ctx=1019, majf=0, minf=9 00:24:11.950 IO depths : 1=0.1%, 2=3.5%, 4=14.1%, 8=68.1%, 16=14.2%, 32=0.0%, >=64=0.0% 00:24:11.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.950 complete : 0=0.0%, 4=91.2%, 8=5.7%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.950 issued rwts: total=1671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.950 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.950 filename1: (groupid=0, jobs=1): err= 0: pid=97631: Sat Dec 7 22:55:25 2024 00:24:11.950 read: IOPS=185, BW=742KiB/s (760kB/s)(7436KiB/10016msec) 00:24:11.950 slat (usec): min=5, max=8031, avg=19.27, stdev=185.98 00:24:11.950 clat (msec): min=21, max=144, avg=86.08, stdev=22.42 00:24:11.950 lat (msec): min=21, max=144, avg=86.10, stdev=22.42 00:24:11.950 clat percentiles (msec): 00:24:11.950 | 1.00th=[ 44], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 71], 00:24:11.950 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 95], 00:24:11.950 | 70.00th=[ 107], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 121], 00:24:11.950 | 99.00th=[ 131], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:24:11.950 | 99.99th=[ 144] 00:24:11.950 bw ( KiB/s): min= 632, max= 1024, per=4.14%, avg=740.00, stdev=112.70, samples=20 00:24:11.950 iops : min= 158, max= 256, avg=185.00, stdev=28.18, samples=20 00:24:11.950 lat (msec) : 50=6.99%, 100=59.82%, 250=33.19% 00:24:11.950 cpu : usr=31.26%, sys=1.93%, ctx=852, majf=0, minf=9 00:24:11.950 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=78.1%, 16=15.2%, 32=0.0%, >=64=0.0% 00:24:11.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.950 complete : 0=0.0%, 4=88.3%, 8=10.5%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.950 issued rwts: total=1859,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.950 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.950 filename1: (groupid=0, jobs=1): err= 0: pid=97632: Sat Dec 7 22:55:25 2024 00:24:11.950 read: IOPS=169, BW=677KiB/s (693kB/s)(6780KiB/10021msec) 00:24:11.950 slat (usec): min=3, max=4032, avg=19.55, stdev=137.88 00:24:11.950 clat (msec): min=38, max=156, avg=94.45, stdev=28.14 00:24:11.950 lat (msec): min=38, max=156, 
avg=94.47, stdev=28.13 00:24:11.950 clat percentiles (msec): 00:24:11.950 | 1.00th=[ 44], 5.00th=[ 50], 10.00th=[ 57], 20.00th=[ 69], 00:24:11.950 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 97], 60.00th=[ 108], 00:24:11.950 | 70.00th=[ 112], 80.00th=[ 116], 90.00th=[ 136], 95.00th=[ 146], 00:24:11.950 | 99.00th=[ 153], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 157], 00:24:11.950 | 99.99th=[ 157] 00:24:11.950 bw ( KiB/s): min= 512, max= 976, per=3.75%, avg=671.50, stdev=157.37, samples=20 00:24:11.950 iops : min= 128, max= 244, avg=167.85, stdev=39.32, samples=20 00:24:11.950 lat (msec) : 50=5.60%, 100=47.08%, 250=47.32% 00:24:11.950 cpu : usr=42.95%, sys=2.27%, ctx=1247, majf=0, minf=9 00:24:11.950 IO depths : 1=0.1%, 2=3.8%, 4=15.0%, 8=67.3%, 16=13.8%, 32=0.0%, >=64=0.0% 00:24:11.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.950 complete : 0=0.0%, 4=91.3%, 8=5.4%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.950 issued rwts: total=1695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.950 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.950 filename1: (groupid=0, jobs=1): err= 0: pid=97633: Sat Dec 7 22:55:25 2024 00:24:11.950 read: IOPS=187, BW=749KiB/s (767kB/s)(7520KiB/10042msec) 00:24:11.950 slat (usec): min=6, max=4023, avg=18.28, stdev=130.78 00:24:11.950 clat (msec): min=12, max=147, avg=85.25, stdev=22.95 00:24:11.950 lat (msec): min=12, max=147, avg=85.27, stdev=22.95 00:24:11.950 clat percentiles (msec): 00:24:11.950 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 67], 00:24:11.950 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 95], 00:24:11.950 | 70.00th=[ 105], 80.00th=[ 110], 90.00th=[ 115], 95.00th=[ 120], 00:24:11.950 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 144], 99.95th=[ 148], 00:24:11.950 | 99.99th=[ 148] 00:24:11.950 bw ( KiB/s): min= 640, max= 1019, per=4.18%, avg=747.65, stdev=119.16, samples=20 00:24:11.950 iops : min= 160, max= 254, avg=186.85, stdev=29.66, samples=20 00:24:11.950 lat (msec) : 20=0.74%, 50=6.22%, 100=59.31%, 250=33.72% 00:24:11.950 cpu : usr=39.02%, sys=1.99%, ctx=1158, majf=0, minf=9 00:24:11.950 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=80.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:24:11.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.950 complete : 0=0.0%, 4=87.9%, 8=11.6%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.950 issued rwts: total=1880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.950 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.950 filename2: (groupid=0, jobs=1): err= 0: pid=97634: Sat Dec 7 22:55:25 2024 00:24:11.950 read: IOPS=200, BW=804KiB/s (823kB/s)(8040KiB/10002msec) 00:24:11.950 slat (usec): min=6, max=12025, avg=31.42, stdev=361.93 00:24:11.950 clat (usec): min=1482, max=145092, avg=79499.83, stdev=28069.79 00:24:11.950 lat (usec): min=1490, max=145126, avg=79531.25, stdev=28074.16 00:24:11.950 clat percentiles (usec): 00:24:11.950 | 1.00th=[ 1745], 5.00th=[ 12518], 10.00th=[ 47973], 20.00th=[ 60031], 00:24:11.950 | 30.00th=[ 69731], 40.00th=[ 71828], 50.00th=[ 78119], 60.00th=[ 83362], 00:24:11.950 | 70.00th=[100140], 80.00th=[107480], 90.00th=[113771], 95.00th=[117965], 00:24:11.950 | 99.00th=[122160], 99.50th=[124257], 99.90th=[130548], 99.95th=[145753], 00:24:11.950 | 99.99th=[145753] 00:24:11.950 bw ( KiB/s): min= 640, max= 1048, per=4.20%, avg=751.47, stdev=117.88, samples=19 00:24:11.950 iops : min= 160, max= 262, avg=187.84, stdev=29.49, samples=19 00:24:11.950 lat (msec) : 2=1.94%, 4=1.14%, 
10=1.59%, 20=0.35%, 50=7.06% 00:24:11.950 lat (msec) : 100=57.16%, 250=30.75% 00:24:11.950 cpu : usr=40.51%, sys=2.36%, ctx=1413, majf=0, minf=9 00:24:11.950 IO depths : 1=0.1%, 2=0.9%, 4=3.3%, 8=80.4%, 16=15.1%, 32=0.0%, >=64=0.0% 00:24:11.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.950 complete : 0=0.0%, 4=87.6%, 8=11.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.950 issued rwts: total=2010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.950 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.950 filename2: (groupid=0, jobs=1): err= 0: pid=97635: Sat Dec 7 22:55:25 2024 00:24:11.950 read: IOPS=190, BW=761KiB/s (780kB/s)(7620KiB/10007msec) 00:24:11.950 slat (usec): min=8, max=4035, avg=18.10, stdev=92.24 00:24:11.950 clat (msec): min=12, max=146, avg=83.96, stdev=22.37 00:24:11.950 lat (msec): min=12, max=146, avg=83.98, stdev=22.37 00:24:11.950 clat percentiles (msec): 00:24:11.950 | 1.00th=[ 39], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 66], 00:24:11.950 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 85], 00:24:11.950 | 70.00th=[ 105], 80.00th=[ 109], 90.00th=[ 114], 95.00th=[ 117], 00:24:11.950 | 99.00th=[ 123], 99.50th=[ 127], 99.90th=[ 146], 99.95th=[ 146], 00:24:11.950 | 99.99th=[ 146] 00:24:11.950 bw ( KiB/s): min= 640, max= 936, per=4.17%, avg=746.58, stdev=94.14, samples=19 00:24:11.950 iops : min= 160, max= 234, avg=186.63, stdev=23.53, samples=19 00:24:11.950 lat (msec) : 20=0.52%, 50=5.62%, 100=61.15%, 250=32.70% 00:24:11.950 cpu : usr=41.17%, sys=2.53%, ctx=1310, majf=0, minf=9 00:24:11.950 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=80.2%, 16=15.3%, 32=0.0%, >=64=0.0% 00:24:11.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.950 complete : 0=0.0%, 4=87.7%, 8=11.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.950 issued rwts: total=1905,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.950 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.950 filename2: (groupid=0, jobs=1): err= 0: pid=97636: Sat Dec 7 22:55:25 2024 00:24:11.950 read: IOPS=188, BW=755KiB/s (773kB/s)(7572KiB/10031msec) 00:24:11.950 slat (usec): min=8, max=8025, avg=18.87, stdev=184.21 00:24:11.950 clat (msec): min=21, max=143, avg=84.62, stdev=23.17 00:24:11.950 lat (msec): min=21, max=143, avg=84.64, stdev=23.18 00:24:11.950 clat percentiles (msec): 00:24:11.950 | 1.00th=[ 33], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 69], 00:24:11.950 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 84], 60.00th=[ 96], 00:24:11.950 | 70.00th=[ 107], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 121], 00:24:11.950 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 144], 99.95th=[ 144], 00:24:11.951 | 99.99th=[ 144] 00:24:11.951 bw ( KiB/s): min= 592, max= 1115, per=4.21%, avg=752.85, stdev=132.76, samples=20 00:24:11.951 iops : min= 148, max= 278, avg=188.15, stdev=33.04, samples=20 00:24:11.951 lat (msec) : 50=8.08%, 100=59.48%, 250=32.44% 00:24:11.951 cpu : usr=31.59%, sys=1.62%, ctx=857, majf=0, minf=9 00:24:11.951 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.5%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:11.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.951 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.951 issued rwts: total=1893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.951 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.951 filename2: (groupid=0, jobs=1): err= 0: pid=97637: Sat Dec 7 22:55:25 2024 00:24:11.951 read: IOPS=167, BW=670KiB/s 
(686kB/s)(6720KiB/10027msec) 00:24:11.951 slat (usec): min=4, max=8026, avg=26.25, stdev=293.09 00:24:11.951 clat (msec): min=37, max=158, avg=95.23, stdev=27.27 00:24:11.951 lat (msec): min=37, max=158, avg=95.25, stdev=27.28 00:24:11.951 clat percentiles (msec): 00:24:11.951 | 1.00th=[ 47], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 71], 00:24:11.951 | 30.00th=[ 74], 40.00th=[ 85], 50.00th=[ 101], 60.00th=[ 107], 00:24:11.951 | 70.00th=[ 110], 80.00th=[ 117], 90.00th=[ 134], 95.00th=[ 146], 00:24:11.951 | 99.00th=[ 153], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 159], 00:24:11.951 | 99.99th=[ 159] 00:24:11.951 bw ( KiB/s): min= 384, max= 952, per=3.73%, avg=667.90, stdev=157.91, samples=20 00:24:11.951 iops : min= 96, max= 238, avg=166.95, stdev=39.44, samples=20 00:24:11.951 lat (msec) : 50=4.88%, 100=45.18%, 250=49.94% 00:24:11.951 cpu : usr=36.54%, sys=2.16%, ctx=1124, majf=0, minf=9 00:24:11.951 IO depths : 1=0.1%, 2=3.5%, 4=13.9%, 8=68.3%, 16=14.2%, 32=0.0%, >=64=0.0% 00:24:11.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.951 complete : 0=0.0%, 4=91.2%, 8=5.7%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.951 issued rwts: total=1680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.951 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.951 filename2: (groupid=0, jobs=1): err= 0: pid=97638: Sat Dec 7 22:55:25 2024 00:24:11.951 read: IOPS=193, BW=773KiB/s (792kB/s)(7736KiB/10004msec) 00:24:11.951 slat (usec): min=3, max=4026, avg=16.83, stdev=91.37 00:24:11.951 clat (msec): min=4, max=155, avg=82.68, stdev=25.14 00:24:11.951 lat (msec): min=4, max=155, avg=82.70, stdev=25.14 00:24:11.951 clat percentiles (msec): 00:24:11.951 | 1.00th=[ 7], 5.00th=[ 47], 10.00th=[ 52], 20.00th=[ 64], 00:24:11.951 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 80], 60.00th=[ 85], 00:24:11.951 | 70.00th=[ 104], 80.00th=[ 108], 90.00th=[ 115], 95.00th=[ 121], 00:24:11.951 | 99.00th=[ 131], 99.50th=[ 131], 99.90th=[ 157], 99.95th=[ 157], 00:24:11.951 | 99.99th=[ 157] 00:24:11.951 bw ( KiB/s): min= 640, max= 976, per=4.17%, avg=745.32, stdev=98.42, samples=19 00:24:11.951 iops : min= 160, max= 244, avg=186.32, stdev=24.59, samples=19 00:24:11.951 lat (msec) : 10=1.65%, 20=0.31%, 50=7.45%, 100=58.84%, 250=31.75% 00:24:11.951 cpu : usr=35.17%, sys=1.70%, ctx=1307, majf=0, minf=9 00:24:11.951 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=80.4%, 16=15.2%, 32=0.0%, >=64=0.0% 00:24:11.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.951 complete : 0=0.0%, 4=87.7%, 8=11.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.951 issued rwts: total=1934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.951 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.951 filename2: (groupid=0, jobs=1): err= 0: pid=97639: Sat Dec 7 22:55:25 2024 00:24:11.951 read: IOPS=194, BW=777KiB/s (796kB/s)(7780KiB/10007msec) 00:24:11.951 slat (usec): min=3, max=8031, avg=27.99, stdev=314.58 00:24:11.951 clat (msec): min=6, max=166, avg=82.21, stdev=23.62 00:24:11.951 lat (msec): min=6, max=166, avg=82.23, stdev=23.62 00:24:11.951 clat percentiles (msec): 00:24:11.951 | 1.00th=[ 26], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 62], 00:24:11.951 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:24:11.951 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 112], 95.00th=[ 121], 00:24:11.951 | 99.00th=[ 121], 99.50th=[ 142], 99.90th=[ 167], 99.95th=[ 167], 00:24:11.951 | 99.99th=[ 167] 00:24:11.951 bw ( KiB/s): min= 656, max= 1024, per=4.24%, avg=758.84, 
stdev=106.49, samples=19 00:24:11.951 iops : min= 164, max= 256, avg=189.68, stdev=26.59, samples=19 00:24:11.951 lat (msec) : 10=0.21%, 20=0.51%, 50=10.08%, 100=59.74%, 250=29.46% 00:24:11.951 cpu : usr=33.75%, sys=1.71%, ctx=913, majf=0, minf=9 00:24:11.951 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=83.0%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:11.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.951 complete : 0=0.0%, 4=86.9%, 8=12.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.951 issued rwts: total=1945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.951 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.951 filename2: (groupid=0, jobs=1): err= 0: pid=97640: Sat Dec 7 22:55:25 2024 00:24:11.951 read: IOPS=194, BW=776KiB/s (795kB/s)(7764KiB/10002msec) 00:24:11.951 slat (usec): min=4, max=8026, avg=25.98, stdev=250.02 00:24:11.951 clat (msec): min=2, max=144, avg=82.34, stdev=25.62 00:24:11.951 lat (msec): min=2, max=144, avg=82.37, stdev=25.62 00:24:11.951 clat percentiles (msec): 00:24:11.951 | 1.00th=[ 5], 5.00th=[ 47], 10.00th=[ 54], 20.00th=[ 64], 00:24:11.951 | 30.00th=[ 70], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 87], 00:24:11.951 | 70.00th=[ 104], 80.00th=[ 109], 90.00th=[ 115], 95.00th=[ 120], 00:24:11.951 | 99.00th=[ 129], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:24:11.951 | 99.99th=[ 144] 00:24:11.951 bw ( KiB/s): min= 592, max= 968, per=4.16%, avg=744.21, stdev=97.91, samples=19 00:24:11.951 iops : min= 148, max= 242, avg=186.05, stdev=24.48, samples=19 00:24:11.951 lat (msec) : 4=0.82%, 10=1.34%, 20=0.62%, 50=5.31%, 100=59.76% 00:24:11.951 lat (msec) : 250=32.15% 00:24:11.951 cpu : usr=39.82%, sys=2.19%, ctx=1303, majf=0, minf=9 00:24:11.951 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=79.0%, 16=15.0%, 32=0.0%, >=64=0.0% 00:24:11.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.951 complete : 0=0.0%, 4=88.0%, 8=11.0%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.951 issued rwts: total=1941,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.951 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:11.951 filename2: (groupid=0, jobs=1): err= 0: pid=97641: Sat Dec 7 22:55:25 2024 00:24:11.951 read: IOPS=191, BW=765KiB/s (783kB/s)(7656KiB/10013msec) 00:24:11.951 slat (usec): min=4, max=6689, avg=23.30, stdev=201.47 00:24:11.951 clat (msec): min=19, max=147, avg=83.58, stdev=22.47 00:24:11.951 lat (msec): min=19, max=147, avg=83.60, stdev=22.47 00:24:11.951 clat percentiles (msec): 00:24:11.951 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 64], 00:24:11.951 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 86], 00:24:11.951 | 70.00th=[ 103], 80.00th=[ 108], 90.00th=[ 114], 95.00th=[ 118], 00:24:11.951 | 99.00th=[ 123], 99.50th=[ 125], 99.90th=[ 144], 99.95th=[ 148], 00:24:11.951 | 99.99th=[ 148] 00:24:11.951 bw ( KiB/s): min= 616, max= 994, per=4.22%, avg=754.37, stdev=110.68, samples=19 00:24:11.951 iops : min= 154, max= 248, avg=188.53, stdev=27.62, samples=19 00:24:11.951 lat (msec) : 20=0.37%, 50=7.21%, 100=61.23%, 250=31.19% 00:24:11.951 cpu : usr=36.21%, sys=2.26%, ctx=1051, majf=0, minf=10 00:24:11.951 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:11.951 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.951 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.951 issued rwts: total=1914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.951 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:24:11.951 00:24:11.951 Run status group 0 (all jobs): 00:24:11.951 READ: bw=17.5MiB/s (18.3MB/s), 667KiB/s-804KiB/s (683kB/s-823kB/s), io=175MiB (184MB), run=10002-10053msec 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.951 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 
-- # xtrace_disable 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.952 bdev_null0 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.952 [2024-12-07 22:55:25.319034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- 
# rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.952 bdev_null1 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:11.952 { 00:24:11.952 "params": { 00:24:11.952 "name": "Nvme$subsystem", 00:24:11.952 "trtype": "$TEST_TRANSPORT", 00:24:11.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.952 "adrfam": "ipv4", 00:24:11.952 "trsvcid": "$NVMF_PORT", 00:24:11.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.952 "hdgst": ${hdgst:-false}, 00:24:11.952 "ddgst": ${ddgst:-false} 00:24:11.952 }, 00:24:11.952 "method": "bdev_nvme_attach_controller" 00:24:11.952 } 00:24:11.952 EOF 00:24:11.952 )") 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # 
local file 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:11.952 { 00:24:11.952 "params": { 00:24:11.952 "name": "Nvme$subsystem", 00:24:11.952 "trtype": "$TEST_TRANSPORT", 00:24:11.952 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.952 "adrfam": "ipv4", 00:24:11.952 "trsvcid": "$NVMF_PORT", 00:24:11.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.952 "hdgst": ${hdgst:-false}, 00:24:11.952 "ddgst": ${ddgst:-false} 00:24:11.952 }, 00:24:11.952 "method": "bdev_nvme_attach_controller" 00:24:11.952 } 00:24:11.952 EOF 00:24:11.952 )") 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:11.952 "params": { 00:24:11.952 "name": "Nvme0", 00:24:11.952 "trtype": "tcp", 00:24:11.952 "traddr": "10.0.0.3", 00:24:11.952 "adrfam": "ipv4", 00:24:11.952 "trsvcid": "4420", 00:24:11.952 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:11.952 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:11.952 "hdgst": false, 00:24:11.952 "ddgst": false 00:24:11.952 }, 00:24:11.952 "method": "bdev_nvme_attach_controller" 00:24:11.952 },{ 00:24:11.952 "params": { 00:24:11.952 "name": "Nvme1", 00:24:11.952 "trtype": "tcp", 00:24:11.952 "traddr": "10.0.0.3", 00:24:11.952 "adrfam": "ipv4", 00:24:11.952 "trsvcid": "4420", 00:24:11.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:11.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:11.952 "hdgst": false, 00:24:11.952 "ddgst": false 00:24:11.952 }, 00:24:11.952 "method": "bdev_nvme_attach_controller" 00:24:11.952 }' 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:11.952 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:11.953 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:11.953 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:11.953 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:11.953 22:55:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:11.953 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:11.953 ... 00:24:11.953 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:11.953 ... 
00:24:11.953 fio-3.35 00:24:11.953 Starting 4 threads 00:24:17.224 00:24:17.225 filename0: (groupid=0, jobs=1): err= 0: pid=97771: Sat Dec 7 22:55:31 2024 00:24:17.225 read: IOPS=2401, BW=18.8MiB/s (19.7MB/s)(93.9MiB/5003msec) 00:24:17.225 slat (nsec): min=3265, max=51808, avg=10691.62, stdev=4669.48 00:24:17.225 clat (usec): min=586, max=6517, avg=3299.27, stdev=971.28 00:24:17.225 lat (usec): min=594, max=6532, avg=3309.96, stdev=971.83 00:24:17.225 clat percentiles (usec): 00:24:17.225 | 1.00th=[ 1254], 5.00th=[ 1319], 10.00th=[ 1369], 20.00th=[ 2737], 00:24:17.225 | 30.00th=[ 3064], 40.00th=[ 3490], 50.00th=[ 3752], 60.00th=[ 3851], 00:24:17.225 | 70.00th=[ 3916], 80.00th=[ 3982], 90.00th=[ 4113], 95.00th=[ 4293], 00:24:17.225 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 5145], 99.95th=[ 5211], 00:24:17.225 | 99.99th=[ 6063] 00:24:17.225 bw ( KiB/s): min=15647, max=22320, per=28.00%, avg=19493.22, stdev=2721.19, samples=9 00:24:17.225 iops : min= 1955, max= 2790, avg=2436.56, stdev=340.30, samples=9 00:24:17.225 lat (usec) : 750=0.15% 00:24:17.225 lat (msec) : 2=16.00%, 4=66.88%, 10=16.98% 00:24:17.225 cpu : usr=91.00%, sys=8.12%, ctx=4, majf=0, minf=9 00:24:17.225 IO depths : 1=0.1%, 2=7.8%, 4=60.4%, 8=31.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:17.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.225 complete : 0=0.0%, 4=97.1%, 8=2.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.225 issued rwts: total=12016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.225 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:17.225 filename0: (groupid=0, jobs=1): err= 0: pid=97772: Sat Dec 7 22:55:31 2024 00:24:17.225 read: IOPS=2016, BW=15.8MiB/s (16.5MB/s)(78.8MiB/5003msec) 00:24:17.225 slat (nsec): min=3532, max=68341, avg=15315.83, stdev=4722.93 00:24:17.225 clat (usec): min=1388, max=6108, avg=3909.74, stdev=359.34 00:24:17.225 lat (usec): min=1401, max=6121, avg=3925.05, stdev=359.40 00:24:17.225 clat percentiles (usec): 00:24:17.225 | 1.00th=[ 2999], 5.00th=[ 3130], 10.00th=[ 3523], 20.00th=[ 3818], 00:24:17.225 | 30.00th=[ 3851], 40.00th=[ 3884], 50.00th=[ 3916], 60.00th=[ 3949], 00:24:17.225 | 70.00th=[ 3982], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4424], 00:24:17.225 | 99.00th=[ 4883], 99.50th=[ 4948], 99.90th=[ 5276], 99.95th=[ 5342], 00:24:17.225 | 99.99th=[ 5604] 00:24:17.225 bw ( KiB/s): min=15616, max=17376, per=23.14%, avg=16113.78, stdev=577.36, samples=9 00:24:17.225 iops : min= 1952, max= 2172, avg=2014.22, stdev=72.17, samples=9 00:24:17.225 lat (msec) : 2=0.33%, 4=70.17%, 10=29.50% 00:24:17.225 cpu : usr=91.52%, sys=7.70%, ctx=7, majf=0, minf=9 00:24:17.225 IO depths : 1=0.1%, 2=21.5%, 4=52.9%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:17.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.225 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.225 issued rwts: total=10090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.225 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:17.225 filename1: (groupid=0, jobs=1): err= 0: pid=97773: Sat Dec 7 22:55:31 2024 00:24:17.225 read: IOPS=2016, BW=15.8MiB/s (16.5MB/s)(78.8MiB/5003msec) 00:24:17.225 slat (nsec): min=3152, max=63966, avg=15607.38, stdev=4821.81 00:24:17.225 clat (usec): min=1381, max=6106, avg=3907.97, stdev=359.37 00:24:17.225 lat (usec): min=1395, max=6120, avg=3923.58, stdev=359.36 00:24:17.225 clat percentiles (usec): 00:24:17.225 | 1.00th=[ 2999], 5.00th=[ 3130], 10.00th=[ 3523], 20.00th=[ 3818], 
00:24:17.225 | 30.00th=[ 3851], 40.00th=[ 3884], 50.00th=[ 3916], 60.00th=[ 3949], 00:24:17.225 | 70.00th=[ 3982], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4424], 00:24:17.225 | 99.00th=[ 4883], 99.50th=[ 4948], 99.90th=[ 5276], 99.95th=[ 5342], 00:24:17.225 | 99.99th=[ 5604] 00:24:17.225 bw ( KiB/s): min=15616, max=17376, per=23.14%, avg=16113.78, stdev=577.36, samples=9 00:24:17.225 iops : min= 1952, max= 2172, avg=2014.22, stdev=72.17, samples=9 00:24:17.225 lat (msec) : 2=0.33%, 4=70.40%, 10=29.28% 00:24:17.225 cpu : usr=91.94%, sys=7.24%, ctx=8, majf=0, minf=9 00:24:17.225 IO depths : 1=0.1%, 2=21.5%, 4=52.9%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:17.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.225 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.225 issued rwts: total=10090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.225 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:17.225 filename1: (groupid=0, jobs=1): err= 0: pid=97774: Sat Dec 7 22:55:31 2024 00:24:17.225 read: IOPS=2268, BW=17.7MiB/s (18.6MB/s)(88.6MiB/5001msec) 00:24:17.225 slat (nsec): min=3554, max=62435, avg=13687.26, stdev=5178.91 00:24:17.225 clat (usec): min=718, max=7076, avg=3481.67, stdev=825.64 00:24:17.225 lat (usec): min=726, max=7092, avg=3495.36, stdev=826.32 00:24:17.225 clat percentiles (usec): 00:24:17.225 | 1.00th=[ 1303], 5.00th=[ 1942], 10.00th=[ 2057], 20.00th=[ 2835], 00:24:17.225 | 30.00th=[ 3425], 40.00th=[ 3752], 50.00th=[ 3851], 60.00th=[ 3884], 00:24:17.225 | 70.00th=[ 3949], 80.00th=[ 3982], 90.00th=[ 4146], 95.00th=[ 4359], 00:24:17.225 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 5145], 99.95th=[ 5276], 00:24:17.225 | 99.99th=[ 6652] 00:24:17.225 bw ( KiB/s): min=15872, max=21888, per=25.76%, avg=17934.22, stdev=2439.66, samples=9 00:24:17.225 iops : min= 1984, max= 2736, avg=2241.78, stdev=304.96, samples=9 00:24:17.225 lat (usec) : 750=0.02% 00:24:17.225 lat (msec) : 2=7.41%, 4=73.82%, 10=18.75% 00:24:17.225 cpu : usr=92.06%, sys=7.02%, ctx=40, majf=0, minf=9 00:24:17.225 IO depths : 1=0.1%, 2=11.8%, 4=58.1%, 8=30.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:17.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.225 complete : 0=0.0%, 4=95.5%, 8=4.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.225 issued rwts: total=11347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.225 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:17.225 00:24:17.225 Run status group 0 (all jobs): 00:24:17.225 READ: bw=68.0MiB/s (71.3MB/s), 15.8MiB/s-18.8MiB/s (16.5MB/s-19.7MB/s), io=340MiB (357MB), run=5001-5003msec 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:17.225 ************************************ 00:24:17.225 END TEST fio_dif_rand_params 00:24:17.225 ************************************ 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.225 00:24:17.225 real 0m23.025s 00:24:17.225 user 2m3.568s 00:24:17.225 sys 0m8.511s 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:17.225 22:55:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:17.225 22:55:31 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:17.225 22:55:31 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:17.225 22:55:31 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:17.225 22:55:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:17.225 ************************************ 00:24:17.225 START TEST fio_dif_digest 00:24:17.225 ************************************ 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:24:17.225 
22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:17.225 bdev_null0 00:24:17.225 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:17.226 [2024-12-07 22:55:31.344669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:17.226 { 00:24:17.226 "params": { 00:24:17.226 "name": 
"Nvme$subsystem", 00:24:17.226 "trtype": "$TEST_TRANSPORT", 00:24:17.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.226 "adrfam": "ipv4", 00:24:17.226 "trsvcid": "$NVMF_PORT", 00:24:17.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.226 "hdgst": ${hdgst:-false}, 00:24:17.226 "ddgst": ${ddgst:-false} 00:24:17.226 }, 00:24:17.226 "method": "bdev_nvme_attach_controller" 00:24:17.226 } 00:24:17.226 EOF 00:24:17.226 )") 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 
00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:17.226 "params": { 00:24:17.226 "name": "Nvme0", 00:24:17.226 "trtype": "tcp", 00:24:17.226 "traddr": "10.0.0.3", 00:24:17.226 "adrfam": "ipv4", 00:24:17.226 "trsvcid": "4420", 00:24:17.226 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:17.226 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:17.226 "hdgst": true, 00:24:17.226 "ddgst": true 00:24:17.226 }, 00:24:17.226 "method": "bdev_nvme_attach_controller" 00:24:17.226 }' 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:17.226 22:55:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:17.226 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:17.226 ... 
00:24:17.226 fio-3.35 00:24:17.226 Starting 3 threads 00:24:27.225 00:24:27.225 filename0: (groupid=0, jobs=1): err= 0: pid=97880: Sat Dec 7 22:55:41 2024 00:24:27.225 read: IOPS=249, BW=31.1MiB/s (32.6MB/s)(312MiB/10009msec) 00:24:27.225 slat (nsec): min=6720, max=44178, avg=9834.48, stdev=4137.89 00:24:27.225 clat (usec): min=11486, max=13842, avg=12019.83, stdev=407.61 00:24:27.225 lat (usec): min=11494, max=13854, avg=12029.67, stdev=407.94 00:24:27.225 clat percentiles (usec): 00:24:27.225 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11731], 20.00th=[11731], 00:24:27.225 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11863], 60.00th=[11863], 00:24:27.225 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12518], 95.00th=[12911], 00:24:27.225 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13829], 99.95th=[13829], 00:24:27.225 | 99.99th=[13829] 00:24:27.225 bw ( KiB/s): min=30658, max=32256, per=33.30%, avg=31848.53, stdev=478.28, samples=19 00:24:27.225 iops : min= 239, max= 252, avg=248.79, stdev= 3.81, samples=19 00:24:27.225 lat (msec) : 20=100.00% 00:24:27.225 cpu : usr=90.34%, sys=9.10%, ctx=23, majf=0, minf=9 00:24:27.225 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:27.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.225 issued rwts: total=2493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.225 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:27.225 filename0: (groupid=0, jobs=1): err= 0: pid=97881: Sat Dec 7 22:55:41 2024 00:24:27.225 read: IOPS=249, BW=31.1MiB/s (32.7MB/s)(312MiB/10005msec) 00:24:27.225 slat (nsec): min=4613, max=53989, avg=14250.76, stdev=4233.20 00:24:27.225 clat (usec): min=9046, max=13783, avg=12008.37, stdev=416.92 00:24:27.225 lat (usec): min=9059, max=13797, avg=12022.62, stdev=417.34 00:24:27.225 clat percentiles (usec): 00:24:27.225 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11731], 20.00th=[11731], 00:24:27.225 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11863], 60.00th=[11863], 00:24:27.225 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12518], 95.00th=[12911], 00:24:27.225 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13698], 99.95th=[13829], 00:24:27.225 | 99.99th=[13829] 00:24:27.225 bw ( KiB/s): min=30720, max=32256, per=33.30%, avg=31851.79, stdev=469.84, samples=19 00:24:27.225 iops : min= 240, max= 252, avg=248.84, stdev= 3.67, samples=19 00:24:27.225 lat (msec) : 10=0.12%, 20=99.88% 00:24:27.225 cpu : usr=91.66%, sys=7.80%, ctx=8, majf=0, minf=0 00:24:27.225 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:27.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.225 issued rwts: total=2493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.225 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:27.225 filename0: (groupid=0, jobs=1): err= 0: pid=97882: Sat Dec 7 22:55:41 2024 00:24:27.225 read: IOPS=249, BW=31.1MiB/s (32.7MB/s)(312MiB/10005msec) 00:24:27.225 slat (nsec): min=7179, max=54364, avg=14049.31, stdev=4141.11 00:24:27.225 clat (usec): min=9047, max=13770, avg=12009.06, stdev=416.94 00:24:27.225 lat (usec): min=9060, max=13783, avg=12023.11, stdev=417.36 00:24:27.225 clat percentiles (usec): 00:24:27.226 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11731], 20.00th=[11731], 00:24:27.226 | 30.00th=[11731], 40.00th=[11863], 
50.00th=[11863], 60.00th=[11863], 00:24:27.226 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12518], 95.00th=[12911], 00:24:27.226 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13698], 99.95th=[13698], 00:24:27.226 | 99.99th=[13829] 00:24:27.226 bw ( KiB/s): min=30720, max=32256, per=33.30%, avg=31851.79, stdev=469.84, samples=19 00:24:27.226 iops : min= 240, max= 252, avg=248.84, stdev= 3.67, samples=19 00:24:27.226 lat (msec) : 10=0.12%, 20=99.88% 00:24:27.226 cpu : usr=91.29%, sys=8.17%, ctx=17, majf=0, minf=9 00:24:27.226 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:27.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:27.226 issued rwts: total=2493,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:27.226 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:27.226 00:24:27.226 Run status group 0 (all jobs): 00:24:27.226 READ: bw=93.4MiB/s (97.9MB/s), 31.1MiB/s-31.1MiB/s (32.6MB/s-32.7MB/s), io=935MiB (980MB), run=10005-10009msec 00:24:27.499 22:55:42 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:27.499 22:55:42 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:27.499 22:55:42 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:27.499 22:55:42 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:27.499 22:55:42 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:27.499 22:55:42 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:27.499 22:55:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.499 22:55:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:27.499 22:55:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.499 22:55:42 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:27.499 22:55:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.499 22:55:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:27.499 22:55:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.499 00:24:27.499 real 0m10.841s 00:24:27.499 user 0m27.903s 00:24:27.499 sys 0m2.717s 00:24:27.500 22:55:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:27.500 ************************************ 00:24:27.500 END TEST fio_dif_digest 00:24:27.500 ************************************ 00:24:27.500 22:55:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:27.500 22:55:42 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:27.500 22:55:42 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:27.500 22:55:42 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:27.500 22:55:42 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:24:27.500 22:55:42 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:27.500 22:55:42 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:24:27.500 22:55:42 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:27.500 22:55:42 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:27.500 rmmod nvme_tcp 00:24:27.759 rmmod nvme_fabrics 00:24:27.759 rmmod nvme_keyring 00:24:27.759 22:55:42 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:27.759 22:55:42 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:24:27.759 22:55:42 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:24:27.759 22:55:42 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 97152 ']' 00:24:27.759 22:55:42 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 97152 00:24:27.759 22:55:42 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 97152 ']' 00:24:27.759 22:55:42 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 97152 00:24:27.759 22:55:42 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:24:27.759 22:55:42 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:27.759 22:55:42 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97152 00:24:27.759 killing process with pid 97152 00:24:27.759 22:55:42 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:27.759 22:55:42 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:27.759 22:55:42 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97152' 00:24:27.759 22:55:42 nvmf_dif -- common/autotest_common.sh@969 -- # kill 97152 00:24:27.759 22:55:42 nvmf_dif -- common/autotest_common.sh@974 -- # wait 97152 00:24:27.759 22:55:42 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:24:27.759 22:55:42 nvmf_dif -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:28.325 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:28.325 Waiting for block devices as requested 00:24:28.325 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:28.325 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:28.325 22:55:43 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:28.325 22:55:43 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:28.325 22:55:43 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:24:28.325 22:55:43 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:24:28.325 22:55:43 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:24:28.325 22:55:43 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:28.325 22:55:43 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:28.326 22:55:43 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:28.326 22:55:43 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:28.326 22:55:43 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:28.326 22:55:43 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:28.326 22:55:43 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.583 22:55:43 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:28.583 22:55:43 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:28.583 22:55:43 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:28.583 22:55:43 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:28.583 22:55:43 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:28.583 22:55:43 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:28.583 22:55:43 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:28.583 22:55:43 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:28.583 22:55:43 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:28.583 22:55:43 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:28.583 22:55:43 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.583 22:55:43 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:28.583 22:55:43 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.583 22:55:43 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:24:28.583 00:24:28.583 real 0m58.426s 00:24:28.583 user 3m45.270s 00:24:28.583 sys 0m19.698s 00:24:28.583 ************************************ 00:24:28.583 END TEST nvmf_dif 00:24:28.583 ************************************ 00:24:28.583 22:55:43 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:28.583 22:55:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:28.583 22:55:43 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:28.583 22:55:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:28.583 22:55:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:28.583 22:55:43 -- common/autotest_common.sh@10 -- # set +x 00:24:28.583 ************************************ 00:24:28.583 START TEST nvmf_abort_qd_sizes 00:24:28.583 ************************************ 00:24:28.583 22:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:28.842 * Looking for test storage... 00:24:28.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:28.842 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:28.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.843 --rc genhtml_branch_coverage=1 00:24:28.843 --rc genhtml_function_coverage=1 00:24:28.843 --rc genhtml_legend=1 00:24:28.843 --rc geninfo_all_blocks=1 00:24:28.843 --rc geninfo_unexecuted_blocks=1 00:24:28.843 00:24:28.843 ' 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:28.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.843 --rc genhtml_branch_coverage=1 00:24:28.843 --rc genhtml_function_coverage=1 00:24:28.843 --rc genhtml_legend=1 00:24:28.843 --rc geninfo_all_blocks=1 00:24:28.843 --rc geninfo_unexecuted_blocks=1 00:24:28.843 00:24:28.843 ' 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:28.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.843 --rc genhtml_branch_coverage=1 00:24:28.843 --rc genhtml_function_coverage=1 00:24:28.843 --rc genhtml_legend=1 00:24:28.843 --rc geninfo_all_blocks=1 00:24:28.843 --rc geninfo_unexecuted_blocks=1 00:24:28.843 00:24:28.843 ' 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:28.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:28.843 --rc genhtml_branch_coverage=1 00:24:28.843 --rc genhtml_function_coverage=1 00:24:28.843 --rc genhtml_legend=1 00:24:28.843 --rc geninfo_all_blocks=1 00:24:28.843 --rc geninfo_unexecuted_blocks=1 00:24:28.843 00:24:28.843 ' 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:28.843 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@456 -- # nvmf_veth_init 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:28.843 Cannot find device "nvmf_init_br" 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:28.843 Cannot find device "nvmf_init_br2" 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:28.843 Cannot find device "nvmf_tgt_br" 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:24:28.843 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:28.843 Cannot find device "nvmf_tgt_br2" 00:24:28.844 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:24:28.844 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:29.102 Cannot find device "nvmf_init_br" 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:29.102 Cannot find device "nvmf_init_br2" 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:29.102 Cannot find device "nvmf_tgt_br" 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:29.102 Cannot find device "nvmf_tgt_br2" 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:29.102 Cannot find device "nvmf_br" 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:29.102 Cannot find device "nvmf_init_if" 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:29.102 Cannot find device "nvmf_init_if2" 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:29.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
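The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init always tears down any leftover test topology before building a fresh one, so the delete/down commands fail harmlessly on a clean host. The entries that follow then construct the test network: a dedicated namespace for the target, veth pairs whose peer ends are enslaved to a bridge, and the 10.0.0.x addressing used by the rest of the run. A condensed sketch of that sequence (one initiator/target pair shown; the script repeats it for nvmf_init_if2/nvmf_tgt_if2), to be run as root with iproute2:

    ip netns add nvmf_tgt_ns_spdk                     # namespace for the SPDK target
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk    # target end leaves the root ns
    ip addr add 10.0.0.1/24 dev nvmf_init_if          # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge                   # bridge joins the two peer ends
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3                                # initiator should now reach the target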
00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:29.102 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:29.102 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:29.103 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:29.103 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:29.103 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:29.103 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:29.103 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:29.103 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:29.103 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:29.103 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:29.103 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:29.103 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:29.103 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:29.103 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:29.103 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:29.103 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:29.103 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:29.103 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:29.361 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:29.361 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:29.361 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:29.361 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:29.361 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:29.361 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:29.361 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:29.361 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:29.361 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:29.361 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:24:29.361 00:24:29.361 --- 10.0.0.3 ping statistics --- 00:24:29.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.362 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:24:29.362 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:29.362 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:29.362 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.082 ms 00:24:29.362 00:24:29.362 --- 10.0.0.4 ping statistics --- 00:24:29.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.362 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:24:29.362 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:29.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:29.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:24:29.362 00:24:29.362 --- 10.0.0.1 ping statistics --- 00:24:29.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.362 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:24:29.362 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:29.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:29.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:24:29.362 00:24:29.362 --- 10.0.0.2 ping statistics --- 00:24:29.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.362 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:24:29.362 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:29.362 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # return 0 00:24:29.362 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:24:29.362 22:55:43 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:29.930 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:29.930 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:30.188 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=98533 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 98533 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 98533 ']' 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.189 22:55:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:30.189 [2024-12-07 22:55:44.870031] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
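With the network verified (all four pings return 0% loss), nvmfappstart launches the target inside the namespace -- ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf, i.e. shared-memory id 0, all tracepoint groups enabled, a four-core reactor mask -- and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers. A minimal stand-in for that launch-and-wait step, assuming a checkout at the spdk_repo path shown in the trace and using the real rpc_get_methods RPC as the liveness probe:

    cd /home/vagrant/spdk_repo/spdk
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!                                   # pid later handed to killprocess
    # Poll the UNIX-domain RPC socket until the app is up and serving RPCs.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"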
00:24:30.189 [2024-12-07 22:55:44.870306] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.447 [2024-12-07 22:55:45.012088] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:30.447 [2024-12-07 22:55:45.059396] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.447 [2024-12-07 22:55:45.059705] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:30.447 [2024-12-07 22:55:45.059976] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.447 [2024-12-07 22:55:45.060231] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.447 [2024-12-07 22:55:45.060277] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.447 [2024-12-07 22:55:45.060541] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.447 [2024-12-07 22:55:45.060769] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:30.447 [2024-12-07 22:55:45.061397] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:30.447 [2024-12-07 22:55:45.061405] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.447 [2024-12-07 22:55:45.100091] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:24:30.447 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:24:30.706 22:55:45 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
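The enumeration just traced is how nvme_in_userspace finds candidate controllers without any driver help: printf %02x turns the arguments 1/8/2 into PCI class 01 (mass storage), subclass 08 (non-volatile memory), and programming interface 02 (NVM Express), and lspci output is filtered for that triple. The same pipeline, lifted from the trace, prints each matching BDF (here 0000:00:10.0 and 0000:00:11.0):

    # lspci -mm -n -D emits: <domain:bus:dev.fn> "<class><subclass>" "<vendor>" "<device>" ... -p<progif>
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'

Each BDF is then accepted only if pci_can_use approves it (no block/allow list is set in this run, hence the empty [[ ... =~ ]] tests) and, on Linux, appended to the bdfs array that run_test consumes.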
00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:30.706 22:55:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:30.706 ************************************ 00:24:30.706 START TEST spdk_target_abort 00:24:30.706 ************************************ 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:30.706 spdk_targetn1 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:30.706 [2024-12-07 22:55:45.327037] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:30.706 [2024-12-07 22:55:45.355224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:30.706 22:55:45 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:30.706 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:30.707 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.707 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:30.707 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.707 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:30.707 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.707 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:24:30.707 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.707 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:30.707 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:30.707 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:30.707 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:30.707 22:55:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:33.992 Initializing NVMe Controllers 00:24:33.992 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:33.992 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:33.992 Initialization complete. Launching workers. 
00:24:33.992 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10074, failed: 0 00:24:33.992 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1035, failed to submit 9039 00:24:33.992 success 836, unsuccessful 199, failed 0 00:24:33.992 22:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:33.992 22:55:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:37.284 Initializing NVMe Controllers 00:24:37.284 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:37.284 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:37.284 Initialization complete. Launching workers. 00:24:37.284 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8787, failed: 0 00:24:37.284 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1134, failed to submit 7653 00:24:37.284 success 394, unsuccessful 740, failed 0 00:24:37.284 22:55:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:37.284 22:55:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:40.570 Initializing NVMe Controllers 00:24:40.570 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:40.570 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:40.570 Initialization complete. Launching workers. 
00:24:40.570 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31293, failed: 0 00:24:40.570 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2280, failed to submit 29013 00:24:40.570 success 435, unsuccessful 1845, failed 0 00:24:40.570 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:40.570 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.570 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:40.570 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.570 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:40.570 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.570 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:40.828 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.828 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 98533 00:24:40.828 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 98533 ']' 00:24:40.828 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 98533 00:24:40.828 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:24:40.828 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:40.828 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98533 00:24:40.828 killing process with pid 98533 00:24:40.828 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:40.828 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:40.828 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98533' 00:24:40.828 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 98533 00:24:40.828 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 98533 00:24:41.086 00:24:41.086 real 0m10.384s 00:24:41.086 user 0m39.839s 00:24:41.086 sys 0m2.033s 00:24:41.086 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:41.086 ************************************ 00:24:41.086 END TEST spdk_target_abort 00:24:41.086 ************************************ 00:24:41.086 22:55:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:41.086 22:55:55 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:41.086 22:55:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:41.086 22:55:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:41.086 22:55:55 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:41.086 ************************************ 00:24:41.086 START TEST kernel_target_abort 00:24:41.086 
************************************ 00:24:41.086 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:24:41.086 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:41.086 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:24:41.086 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:41.086 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:41.086 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.086 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.086 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:41.086 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.087 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:41.087 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:41.087 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:41.087 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:41.087 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:41.087 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:24:41.087 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:41.087 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:41.087 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:41.087 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:24:41.087 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:41.087 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:24:41.087 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:41.087 22:55:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:41.345 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:41.345 Waiting for block devices as requested 00:24:41.604 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:41.604 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:41.604 No valid GPT data, bailing 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:41.604 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:41.863 No valid GPT data, bailing 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
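configure_kernel_target needs a free NVMe block device to export through the kernel nvmet target, and the checks traced around this point screen each /sys/block/nvme* entry: zoned namespaces are skipped, then block_in_use runs spdk-gpt.py and blkid -s PTTYPE. "No valid GPT data, bailing" plus an empty PTTYPE means the disk carries no partition table and is safe to claim, so nvme is re-pointed at each passing device and the last one wins (/dev/nvme1n1 in this run). A condensed sketch of that screen, under the same assumptions (blank test disks, blkid available):

    nvme=
    for block in /sys/block/nvme*; do
        dev=/dev/${block##*/}
        # Skip zoned namespaces (queue/zoned reports something other than "none").
        [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
        # Skip anything that already carries a partition table.
        [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue
        nvme=$dev            # last free device wins, as in the trace
    done
    echo "selected: $nvme"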
00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:41.863 No valid GPT data, bailing 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:41.863 No valid GPT data, bailing 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:41.863 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 --hostid=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 -a 10.0.0.1 -t tcp -s 4420 00:24:41.863 00:24:41.863 Discovery Log Number of Records 2, Generation counter 2 00:24:41.863 =====Discovery Log Entry 0====== 00:24:41.863 trtype: tcp 00:24:41.863 adrfam: ipv4 00:24:41.863 subtype: current discovery subsystem 00:24:41.863 treq: not specified, sq flow control disable supported 00:24:41.863 portid: 1 00:24:41.863 trsvcid: 4420 00:24:41.863 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:41.863 traddr: 10.0.0.1 00:24:41.863 eflags: none 00:24:41.863 sectype: none 00:24:41.863 =====Discovery Log Entry 1====== 00:24:41.863 trtype: tcp 00:24:41.863 adrfam: ipv4 00:24:41.864 subtype: nvme subsystem 00:24:41.864 treq: not specified, sq flow control disable supported 00:24:41.864 portid: 1 00:24:41.864 trsvcid: 4420 00:24:41.864 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:41.864 traddr: 10.0.0.1 00:24:41.864 eflags: none 00:24:41.864 sectype: none 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:41.864 22:55:56 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:41.864 22:55:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:45.152 Initializing NVMe Controllers 00:24:45.152 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:45.152 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:45.152 Initialization complete. Launching workers. 00:24:45.152 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31663, failed: 0 00:24:45.152 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31663, failed to submit 0 00:24:45.152 success 0, unsuccessful 31663, failed 0 00:24:45.152 22:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:45.152 22:55:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:48.440 Initializing NVMe Controllers 00:24:48.440 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:48.440 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:48.440 Initialization complete. Launching workers. 
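For orientation, the kernel soft-target setup the xtrace above steps through is just a configfs sequence. The sketch below is a hedged recap, not the script itself: bash xtrace never prints redirections, so the attribute file name after each '>' is an assumption based on the standard kernel nvmet configfs layout; the NQN, backing device, and address values are copied from the log.

# Hedged recap of the nvmet configfs setup above. Redirect targets are
# assumptions (xtrace hides them); all values come from the log itself.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"    # assumed target
echo 1 > "$subsys/attr_allow_any_host"                           # assumed target
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"           # assumed target
echo 1 > "$subsys/namespaces/1/enable"                           # assumed target
echo 10.0.0.1 > "$port/addr_traddr"                              # assumed target
echo tcp > "$port/addr_trtype"                                   # assumed target
echo 4420 > "$port/addr_trsvcid"                                 # assumed target
echo ipv4 > "$port/addr_adrfam"                                  # assumed target
ln -s "$subsys" "$port/subsystems/"

The nvme discover output above is the check that this worked: two discovery log records, the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn at 10.0.0.1:4420.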
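The three abort runs (the qd=4 results appear above; qd=24 and qd=64 follow) are driven by one loop in the rabort helper. A hedged condensation, with every flag taken from the command lines in the log:

# Queue-depth sweep driving the three abort runs (qds=(4 24 64) per the log).
abort=/home/vagrant/spdk_repo/spdk/build/examples/abort
tgt='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    # -w rw -M 50: 50/50 read/write mix; -o 4096: 4 KiB I/O. The example
    # submits abort commands against its own outstanding I/O at this depth.
    "$abort" -q "$qd" -w rw -M 50 -o 4096 -r "$tgt"
done

The counters in the results read accordingly: at qd=4 every abort could be submitted (31663 submitted, 0 failed to submit), while at the higher depths a large share fail to submit, presumably because the abort queue itself saturates before the outstanding I/O does.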
00:24:48.440 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63266, failed: 0 00:24:48.440 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25356, failed to submit 37910 00:24:48.440 success 0, unsuccessful 25356, failed 0 00:24:48.440 22:56:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:48.440 22:56:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:51.731 Initializing NVMe Controllers 00:24:51.731 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:51.731 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:51.731 Initialization complete. Launching workers. 00:24:51.731 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68513, failed: 0 00:24:51.731 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17098, failed to submit 51415 00:24:51.731 success 0, unsuccessful 17098, failed 0 00:24:51.731 22:56:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:51.731 22:56:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:51.731 22:56:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:24:51.731 22:56:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:51.731 22:56:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:51.731 22:56:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:51.731 22:56:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:51.731 22:56:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:24:51.731 22:56:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:24:51.731 22:56:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:52.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:52.864 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:52.864 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:52.864 ************************************ 00:24:52.864 END TEST kernel_target_abort 00:24:52.864 ************************************ 00:24:52.864 00:24:52.864 real 0m11.845s 00:24:52.864 user 0m5.798s 00:24:52.864 sys 0m3.451s 00:24:52.864 22:56:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:52.864 22:56:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:52.864 22:56:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:52.864 22:56:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:52.864 
22:56:07 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:52.864 22:56:07 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:24:52.864 22:56:07 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:52.864 22:56:07 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:24:52.864 22:56:07 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:52.864 22:56:07 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:52.864 rmmod nvme_tcp 00:24:53.123 rmmod nvme_fabrics 00:24:53.123 rmmod nvme_keyring 00:24:53.123 22:56:07 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:53.123 Process with pid 98533 is not found 00:24:53.123 22:56:07 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:24:53.123 22:56:07 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:24:53.123 22:56:07 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 98533 ']' 00:24:53.123 22:56:07 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 98533 00:24:53.123 22:56:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 98533 ']' 00:24:53.123 22:56:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 98533 00:24:53.123 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (98533) - No such process 00:24:53.123 22:56:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 98533 is not found' 00:24:53.123 22:56:07 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:24:53.123 22:56:07 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:53.381 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:53.381 Waiting for block devices as requested 00:24:53.381 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:53.638 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:53.638 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:53.638 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:53.638 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:24:53.638 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:53.638 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:24:53.638 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:24:53.638 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:53.638 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:53.638 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:53.638 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:53.638 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:53.638 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:53.638 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:53.638 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:53.638 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:53.638 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:53.638 22:56:08 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:53.638 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:53.897 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:53.897 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:53.897 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:53.897 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:53.897 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.897 22:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:53.897 22:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.897 22:56:08 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:24:53.897 00:24:53.897 real 0m25.187s 00:24:53.897 user 0m46.797s 00:24:53.897 sys 0m6.910s 00:24:53.897 22:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:53.897 22:56:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:53.897 ************************************ 00:24:53.897 END TEST nvmf_abort_qd_sizes 00:24:53.897 ************************************ 00:24:53.897 22:56:08 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:53.897 22:56:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:53.897 22:56:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:53.897 22:56:08 -- common/autotest_common.sh@10 -- # set +x 00:24:53.897 ************************************ 00:24:53.897 START TEST keyring_file 00:24:53.897 ************************************ 00:24:53.897 22:56:08 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:53.897 * Looking for test storage... 
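The cleanup that closed kernel_target_abort above mirrors the configfs setup in reverse; a condensed, hedged restatement of the commands visible in the xtrace (the echo redirect target is again assumed):

# Tear down the kernel soft target, then unload the modules.
echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable  # assumed target
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
modprobe -r nvmet_tcp nvmet    # safe once no holders remain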
00:24:53.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:53.897 22:56:08 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:53.897 22:56:08 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:24:53.897 22:56:08 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:54.157 22:56:08 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@345 -- # : 1 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@353 -- # local d=1 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@355 -- # echo 1 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@353 -- # local d=2 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@355 -- # echo 2 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@368 -- # return 0 00:24:54.157 22:56:08 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:54.157 22:56:08 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:54.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.157 --rc genhtml_branch_coverage=1 00:24:54.157 --rc genhtml_function_coverage=1 00:24:54.157 --rc genhtml_legend=1 00:24:54.157 --rc geninfo_all_blocks=1 00:24:54.157 --rc geninfo_unexecuted_blocks=1 00:24:54.157 00:24:54.157 ' 00:24:54.157 22:56:08 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:54.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.157 --rc genhtml_branch_coverage=1 00:24:54.157 --rc genhtml_function_coverage=1 00:24:54.157 --rc genhtml_legend=1 00:24:54.157 --rc geninfo_all_blocks=1 00:24:54.157 --rc 
geninfo_unexecuted_blocks=1 00:24:54.157 00:24:54.157 ' 00:24:54.157 22:56:08 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:54.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.157 --rc genhtml_branch_coverage=1 00:24:54.157 --rc genhtml_function_coverage=1 00:24:54.157 --rc genhtml_legend=1 00:24:54.157 --rc geninfo_all_blocks=1 00:24:54.157 --rc geninfo_unexecuted_blocks=1 00:24:54.157 00:24:54.157 ' 00:24:54.157 22:56:08 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:54.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.157 --rc genhtml_branch_coverage=1 00:24:54.157 --rc genhtml_function_coverage=1 00:24:54.157 --rc genhtml_legend=1 00:24:54.157 --rc geninfo_all_blocks=1 00:24:54.157 --rc geninfo_unexecuted_blocks=1 00:24:54.157 00:24:54.157 ' 00:24:54.157 22:56:08 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:54.157 22:56:08 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.157 22:56:08 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.157 22:56:08 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.157 22:56:08 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.158 22:56:08 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.158 22:56:08 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.158 22:56:08 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:54.158 22:56:08 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@51 -- # : 0 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:54.158 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:54.158 22:56:08 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:54.158 22:56:08 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:54.158 22:56:08 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:54.158 22:56:08 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:54.158 22:56:08 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:54.158 22:56:08 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:54.158 22:56:08 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.y7pywWZYZm 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@729 -- # python - 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.y7pywWZYZm 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.y7pywWZYZm 00:24:54.158 22:56:08 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.y7pywWZYZm 00:24:54.158 22:56:08 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Em7IlsvFv9 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:24:54.158 22:56:08 keyring_file -- nvmf/common.sh@729 -- # python - 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Em7IlsvFv9 00:24:54.158 22:56:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Em7IlsvFv9 00:24:54.158 22:56:08 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Em7IlsvFv9 00:24:54.158 22:56:08 keyring_file -- keyring/file.sh@30 -- # tgtpid=99427 00:24:54.158 22:56:08 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:54.158 22:56:08 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99427 00:24:54.158 22:56:08 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99427 ']' 00:24:54.158 22:56:08 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.158 22:56:08 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:54.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
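prep_key above creates each key file with mktemp, fills it via an inline "python -", and chmods it to 0600 (the keyring module rejects anything looser, as a later negative test in this log shows). A minimal sketch of what that inline python computes, assuming SPDK's TLS PSK interchange format: base64 of the raw key bytes plus their little-endian CRC-32, prefixed with "NVMeTLSkey-1" and a two-hex-digit hash indicator (digest 0 = stored unhashed):

# Hedged sketch of prep_key's inline python; only the key material and the
# 0600 mode are taken from the log, the format details are assumptions.
path=$(mktemp)
python3 - > "$path" <<'EOF'
import base64, zlib
key = bytes.fromhex("00112233445566778899aabbccddeeff")
digest = 0  # 0 = PSK is not hashed
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
chmod 0600 "$path"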
00:24:54.158 22:56:08 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.158 22:56:08 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:54.158 22:56:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:54.439 [2024-12-07 22:56:08.977012] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:24:54.439 [2024-12-07 22:56:08.977706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99427 ] 00:24:54.439 [2024-12-07 22:56:09.117786] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.439 [2024-12-07 22:56:09.161279] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.699 [2024-12-07 22:56:09.205580] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:24:54.699 22:56:09 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:54.699 [2024-12-07 22:56:09.348499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.699 null0 00:24:54.699 [2024-12-07 22:56:09.380469] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:54.699 [2024-12-07 22:56:09.380653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.699 22:56:09 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:54.699 [2024-12-07 22:56:09.408466] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:54.699 request: 00:24:54.699 { 00:24:54.699 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:54.699 "secure_channel": false, 00:24:54.699 "listen_address": { 00:24:54.699 "trtype": "tcp", 00:24:54.699 "traddr": "127.0.0.1", 00:24:54.699 "trsvcid": "4420" 00:24:54.699 }, 00:24:54.699 "method": "nvmf_subsystem_add_listener", 
00:24:54.699 "req_id": 1 00:24:54.699 } 00:24:54.699 Got JSON-RPC error response 00:24:54.699 response: 00:24:54.699 { 00:24:54.699 "code": -32602, 00:24:54.699 "message": "Invalid parameters" 00:24:54.699 } 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:54.699 22:56:09 keyring_file -- keyring/file.sh@47 -- # bperfpid=99432 00:24:54.699 22:56:09 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:54.699 22:56:09 keyring_file -- keyring/file.sh@49 -- # waitforlisten 99432 /var/tmp/bperf.sock 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99432 ']' 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:54.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:54.699 22:56:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:54.958 [2024-12-07 22:56:09.470404] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
00:24:54.958 [2024-12-07 22:56:09.470497] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99432 ] 00:24:54.958 [2024-12-07 22:56:09.607109] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.958 [2024-12-07 22:56:09.648934] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.958 [2024-12-07 22:56:09.682171] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:55.218 22:56:09 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:55.218 22:56:09 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:24:55.218 22:56:09 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.y7pywWZYZm 00:24:55.218 22:56:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.y7pywWZYZm 00:24:55.477 22:56:10 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Em7IlsvFv9 00:24:55.477 22:56:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Em7IlsvFv9 00:24:55.736 22:56:10 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:24:55.736 22:56:10 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:24:55.736 22:56:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:55.736 22:56:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:55.736 22:56:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:55.994 22:56:10 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.y7pywWZYZm == \/\t\m\p\/\t\m\p\.\y\7\p\y\w\W\Z\Y\Z\m ]] 00:24:55.994 22:56:10 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:24:55.994 22:56:10 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:24:55.994 22:56:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:55.994 22:56:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:55.994 22:56:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:56.253 22:56:10 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Em7IlsvFv9 == \/\t\m\p\/\t\m\p\.\E\m\7\I\l\s\v\F\v\9 ]] 00:24:56.253 22:56:10 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:24:56.253 22:56:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:56.253 22:56:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:56.253 22:56:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:56.253 22:56:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:56.253 22:56:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:56.512 22:56:11 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:24:56.512 22:56:11 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:24:56.512 22:56:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:56.512 22:56:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:56.512 22:56:11 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:56.512 22:56:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:56.512 22:56:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:56.772 22:56:11 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:24:56.772 22:56:11 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:56.772 22:56:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:24:57.055 [2024-12-07 22:56:11.560114] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:57.055 nvme0n1 00:24:57.055 22:56:11 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:24:57.055 22:56:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:57.055 22:56:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:57.055 22:56:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:57.055 22:56:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:57.055 22:56:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:57.336 22:56:11 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:24:57.336 22:56:11 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:24:57.336 22:56:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:57.336 22:56:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:57.336 22:56:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:57.336 22:56:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:57.336 22:56:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:57.605 22:56:12 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:24:57.605 22:56:12 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:57.605 Running I/O for 1 seconds... 
00:24:58.540 13739.00 IOPS, 53.67 MiB/s 00:24:58.540 Latency(us) 00:24:58.540 [2024-12-07T22:56:13.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.540 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:24:58.540 nvme0n1 : 1.01 13786.46 53.85 0.00 0.00 9260.37 3574.69 13285.93 00:24:58.540 [2024-12-07T22:56:13.306Z] =================================================================================================================== 00:24:58.540 [2024-12-07T22:56:13.306Z] Total : 13786.46 53.85 0.00 0.00 9260.37 3574.69 13285.93 00:24:58.540 { 00:24:58.540 "results": [ 00:24:58.540 { 00:24:58.540 "job": "nvme0n1", 00:24:58.540 "core_mask": "0x2", 00:24:58.540 "workload": "randrw", 00:24:58.540 "percentage": 50, 00:24:58.540 "status": "finished", 00:24:58.540 "queue_depth": 128, 00:24:58.540 "io_size": 4096, 00:24:58.540 "runtime": 1.005987, 00:24:58.540 "iops": 13786.460461218683, 00:24:58.540 "mibps": 53.85336117663548, 00:24:58.540 "io_failed": 0, 00:24:58.540 "io_timeout": 0, 00:24:58.540 "avg_latency_us": 9260.37353469805, 00:24:58.540 "min_latency_us": 3574.690909090909, 00:24:58.540 "max_latency_us": 13285.934545454546 00:24:58.540 } 00:24:58.540 ], 00:24:58.540 "core_count": 1 00:24:58.540 } 00:24:58.799 22:56:13 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:24:58.799 22:56:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:24:59.058 22:56:13 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:24:59.058 22:56:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:59.058 22:56:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:59.058 22:56:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:59.058 22:56:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:59.058 22:56:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:24:59.317 22:56:13 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:24:59.317 22:56:13 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:24:59.317 22:56:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:24:59.317 22:56:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:59.317 22:56:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:59.317 22:56:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:24:59.317 22:56:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:59.576 22:56:14 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:24:59.576 22:56:14 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:59.576 22:56:14 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:24:59.576 22:56:14 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:59.576 22:56:14 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:24:59.576 22:56:14 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:59.576 22:56:14 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:24:59.576 22:56:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:59.576 22:56:14 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:59.576 22:56:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:24:59.835 [2024-12-07 22:56:14.424538] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:59.835 [2024-12-07 22:56:14.424913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1964320 (107): Transport endpoint is not connected 00:24:59.835 [2024-12-07 22:56:14.425897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1964320 (9): Bad file descriptor 00:24:59.835 request: 00:24:59.835 { 00:24:59.835 "name": "nvme0", 00:24:59.835 "trtype": "tcp", 00:24:59.835 "traddr": "127.0.0.1", 00:24:59.835 "adrfam": "ipv4", 00:24:59.835 "trsvcid": "4420", 00:24:59.835 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:59.835 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:59.835 "prchk_reftag": false, 00:24:59.835 "prchk_guard": false, 00:24:59.835 "hdgst": false, 00:24:59.835 "ddgst": false, 00:24:59.835 "psk": "key1", 00:24:59.835 "allow_unrecognized_csi": false, 00:24:59.835 "method": "bdev_nvme_attach_controller", 00:24:59.835 "req_id": 1 00:24:59.835 } 00:24:59.835 Got JSON-RPC error response 00:24:59.835 response: 00:24:59.835 { 00:24:59.835 "code": -5, 00:24:59.835 "message": "Input/output error" 00:24:59.835 } 00:24:59.835 [2024-12-07 22:56:14.426881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:59.835 [2024-12-07 22:56:14.426927] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:24:59.835 [2024-12-07 22:56:14.426937] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:24:59.835 [2024-12-07 22:56:14.426947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
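The errors above are the test passing, not failing: the attach is wrapped in the NOT helper from test/common/autotest_common.sh, which inverts the exit status, so connecting with the wrong PSK (key1) has to produce exactly this JSON-RPC -5 / Input/output error. A condensed, hedged restatement of the pattern, with the rpc.py invocations copied from the log and NOT simplified to its essence:

# Simplified stand-in for the real NOT() helper: succeed iff "$@" fails.
NOT() { if "$@"; then return 1; else return 0; fi; }

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# key0 matches the target's configured PSK; key1 deliberately does not, so
# the TLS handshake is torn down and the attach must fail for NOT to pass.
NOT "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1

# Reference counts show which keys a live controller actually holds.
"$rpc" -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key0")' | jq -r .refcnt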
00:24:59.835 22:56:14 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:24:59.835 22:56:14 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:59.835 22:56:14 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:59.835 22:56:14 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:59.835 22:56:14 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:24:59.835 22:56:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:24:59.835 22:56:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:24:59.835 22:56:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:24:59.835 22:56:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:24:59.835 22:56:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:00.095 22:56:14 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:25:00.095 22:56:14 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:25:00.095 22:56:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:00.095 22:56:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:00.095 22:56:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:00.095 22:56:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:00.095 22:56:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:00.354 22:56:14 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:25:00.354 22:56:14 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:25:00.354 22:56:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:00.612 22:56:15 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:25:00.612 22:56:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:25:00.870 22:56:15 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:25:00.870 22:56:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:00.870 22:56:15 keyring_file -- keyring/file.sh@78 -- # jq length 00:25:00.870 22:56:15 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:25:00.870 22:56:15 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.y7pywWZYZm 00:25:00.870 22:56:15 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.y7pywWZYZm 00:25:00.870 22:56:15 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:00.870 22:56:15 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.y7pywWZYZm 00:25:00.870 22:56:15 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:00.870 22:56:15 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.870 22:56:15 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:00.870 22:56:15 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:00.870 22:56:15 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.y7pywWZYZm 00:25:00.870 22:56:15 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.y7pywWZYZm 00:25:01.128 [2024-12-07 22:56:15.823347] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.y7pywWZYZm': 0100660 00:25:01.128 [2024-12-07 22:56:15.823381] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:01.128 request: 00:25:01.128 { 00:25:01.128 "name": "key0", 00:25:01.128 "path": "/tmp/tmp.y7pywWZYZm", 00:25:01.128 "method": "keyring_file_add_key", 00:25:01.128 "req_id": 1 00:25:01.128 } 00:25:01.128 Got JSON-RPC error response 00:25:01.128 response: 00:25:01.128 { 00:25:01.128 "code": -1, 00:25:01.128 "message": "Operation not permitted" 00:25:01.128 } 00:25:01.128 22:56:15 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:01.128 22:56:15 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:01.128 22:56:15 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:01.128 22:56:15 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:01.128 22:56:15 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.y7pywWZYZm 00:25:01.128 22:56:15 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.y7pywWZYZm 00:25:01.128 22:56:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.y7pywWZYZm 00:25:01.387 22:56:16 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.y7pywWZYZm 00:25:01.387 22:56:16 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:25:01.387 22:56:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:01.387 22:56:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:01.387 22:56:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:01.387 22:56:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:01.387 22:56:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:01.645 22:56:16 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:25:01.645 22:56:16 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:01.645 22:56:16 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:01.645 22:56:16 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:01.645 22:56:16 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:01.904 22:56:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:01.904 22:56:16 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:01.904 22:56:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:01.904 22:56:16 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:01.904 22:56:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 
127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:01.904 [2024-12-07 22:56:16.611530] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.y7pywWZYZm': No such file or directory 00:25:01.904 [2024-12-07 22:56:16.611581] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:01.904 [2024-12-07 22:56:16.611616] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:01.904 [2024-12-07 22:56:16.611624] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:25:01.904 [2024-12-07 22:56:16.611632] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:01.904 [2024-12-07 22:56:16.611639] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:01.904 request: 00:25:01.904 { 00:25:01.904 "name": "nvme0", 00:25:01.904 "trtype": "tcp", 00:25:01.904 "traddr": "127.0.0.1", 00:25:01.904 "adrfam": "ipv4", 00:25:01.904 "trsvcid": "4420", 00:25:01.904 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:01.904 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:01.904 "prchk_reftag": false, 00:25:01.904 "prchk_guard": false, 00:25:01.904 "hdgst": false, 00:25:01.904 "ddgst": false, 00:25:01.904 "psk": "key0", 00:25:01.904 "allow_unrecognized_csi": false, 00:25:01.904 "method": "bdev_nvme_attach_controller", 00:25:01.904 "req_id": 1 00:25:01.904 } 00:25:01.904 Got JSON-RPC error response 00:25:01.904 response: 00:25:01.904 { 00:25:01.904 "code": -19, 00:25:01.904 "message": "No such device" 00:25:01.904 } 00:25:01.904 22:56:16 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:01.904 22:56:16 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:01.904 22:56:16 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:01.904 22:56:16 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:01.904 22:56:16 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:25:01.904 22:56:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:02.162 22:56:16 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:02.162 22:56:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:02.162 22:56:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:02.162 22:56:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:02.162 22:56:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:02.162 22:56:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:02.162 22:56:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VxsqUDjxC4 00:25:02.162 22:56:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:02.162 22:56:16 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:02.162 22:56:16 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:25:02.162 22:56:16 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:02.162 22:56:16 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:02.162 22:56:16 keyring_file -- 
nvmf/common.sh@728 -- # digest=0 00:25:02.162 22:56:16 keyring_file -- nvmf/common.sh@729 -- # python - 00:25:02.421 22:56:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VxsqUDjxC4 00:25:02.421 22:56:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VxsqUDjxC4 00:25:02.421 22:56:16 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.VxsqUDjxC4 00:25:02.421 22:56:16 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VxsqUDjxC4 00:25:02.421 22:56:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VxsqUDjxC4 00:25:02.679 22:56:17 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:02.679 22:56:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:02.938 nvme0n1 00:25:02.938 22:56:17 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:25:02.938 22:56:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:02.938 22:56:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:02.938 22:56:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:02.938 22:56:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:02.938 22:56:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.196 22:56:17 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:25:03.196 22:56:17 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:25:03.196 22:56:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:03.454 22:56:17 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:25:03.454 22:56:17 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:25:03.454 22:56:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:03.455 22:56:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.455 22:56:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.712 22:56:18 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:25:03.712 22:56:18 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:25:03.712 22:56:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:03.712 22:56:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:03.712 22:56:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.713 22:56:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.713 22:56:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:03.971 22:56:18 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:25:03.971 22:56:18 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:03.971 22:56:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:25:03.971 22:56:18 keyring_file -- keyring/file.sh@105 -- # jq length 00:25:03.971 22:56:18 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:25:03.971 22:56:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:04.539 22:56:19 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:25:04.539 22:56:19 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.VxsqUDjxC4 00:25:04.539 22:56:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.VxsqUDjxC4 00:25:04.539 22:56:19 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Em7IlsvFv9 00:25:04.539 22:56:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Em7IlsvFv9 00:25:04.799 22:56:19 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:04.799 22:56:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:05.368 nvme0n1 00:25:05.368 22:56:19 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:25:05.368 22:56:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:05.628 22:56:20 keyring_file -- keyring/file.sh@113 -- # config='{ 00:25:05.628 "subsystems": [ 00:25:05.628 { 00:25:05.628 "subsystem": "keyring", 00:25:05.628 "config": [ 00:25:05.628 { 00:25:05.628 "method": "keyring_file_add_key", 00:25:05.628 "params": { 00:25:05.628 "name": "key0", 00:25:05.628 "path": "/tmp/tmp.VxsqUDjxC4" 00:25:05.629 } 00:25:05.629 }, 00:25:05.629 { 00:25:05.629 "method": "keyring_file_add_key", 00:25:05.629 "params": { 00:25:05.629 "name": "key1", 00:25:05.629 "path": "/tmp/tmp.Em7IlsvFv9" 00:25:05.629 } 00:25:05.629 } 00:25:05.629 ] 00:25:05.629 }, 00:25:05.629 { 00:25:05.629 "subsystem": "iobuf", 00:25:05.629 "config": [ 00:25:05.629 { 00:25:05.629 "method": "iobuf_set_options", 00:25:05.629 "params": { 00:25:05.629 "small_pool_count": 8192, 00:25:05.629 "large_pool_count": 1024, 00:25:05.629 "small_bufsize": 8192, 00:25:05.629 "large_bufsize": 135168 00:25:05.629 } 00:25:05.629 } 00:25:05.629 ] 00:25:05.629 }, 00:25:05.629 { 00:25:05.629 "subsystem": "sock", 00:25:05.629 "config": [ 00:25:05.629 { 00:25:05.629 "method": "sock_set_default_impl", 00:25:05.629 "params": { 00:25:05.629 "impl_name": "uring" 00:25:05.629 } 00:25:05.629 }, 00:25:05.629 { 00:25:05.629 "method": "sock_impl_set_options", 00:25:05.629 "params": { 00:25:05.629 "impl_name": "ssl", 00:25:05.629 "recv_buf_size": 4096, 00:25:05.629 "send_buf_size": 4096, 00:25:05.629 "enable_recv_pipe": true, 00:25:05.629 "enable_quickack": false, 00:25:05.629 "enable_placement_id": 0, 00:25:05.629 "enable_zerocopy_send_server": true, 00:25:05.629 "enable_zerocopy_send_client": false, 00:25:05.629 "zerocopy_threshold": 0, 00:25:05.629 "tls_version": 0, 00:25:05.629 "enable_ktls": false 00:25:05.629 } 00:25:05.629 }, 00:25:05.629 { 00:25:05.629 "method": "sock_impl_set_options", 00:25:05.629 
"params": { 00:25:05.629 "impl_name": "posix", 00:25:05.629 "recv_buf_size": 2097152, 00:25:05.629 "send_buf_size": 2097152, 00:25:05.629 "enable_recv_pipe": true, 00:25:05.629 "enable_quickack": false, 00:25:05.629 "enable_placement_id": 0, 00:25:05.629 "enable_zerocopy_send_server": true, 00:25:05.629 "enable_zerocopy_send_client": false, 00:25:05.629 "zerocopy_threshold": 0, 00:25:05.629 "tls_version": 0, 00:25:05.629 "enable_ktls": false 00:25:05.629 } 00:25:05.629 }, 00:25:05.629 { 00:25:05.629 "method": "sock_impl_set_options", 00:25:05.629 "params": { 00:25:05.629 "impl_name": "uring", 00:25:05.629 "recv_buf_size": 2097152, 00:25:05.629 "send_buf_size": 2097152, 00:25:05.629 "enable_recv_pipe": true, 00:25:05.629 "enable_quickack": false, 00:25:05.629 "enable_placement_id": 0, 00:25:05.629 "enable_zerocopy_send_server": false, 00:25:05.629 "enable_zerocopy_send_client": false, 00:25:05.629 "zerocopy_threshold": 0, 00:25:05.629 "tls_version": 0, 00:25:05.629 "enable_ktls": false 00:25:05.629 } 00:25:05.629 } 00:25:05.629 ] 00:25:05.629 }, 00:25:05.629 { 00:25:05.629 "subsystem": "vmd", 00:25:05.629 "config": [] 00:25:05.629 }, 00:25:05.629 { 00:25:05.629 "subsystem": "accel", 00:25:05.629 "config": [ 00:25:05.629 { 00:25:05.629 "method": "accel_set_options", 00:25:05.629 "params": { 00:25:05.629 "small_cache_size": 128, 00:25:05.629 "large_cache_size": 16, 00:25:05.629 "task_count": 2048, 00:25:05.629 "sequence_count": 2048, 00:25:05.629 "buf_count": 2048 00:25:05.629 } 00:25:05.629 } 00:25:05.629 ] 00:25:05.629 }, 00:25:05.629 { 00:25:05.629 "subsystem": "bdev", 00:25:05.629 "config": [ 00:25:05.629 { 00:25:05.629 "method": "bdev_set_options", 00:25:05.629 "params": { 00:25:05.629 "bdev_io_pool_size": 65535, 00:25:05.629 "bdev_io_cache_size": 256, 00:25:05.629 "bdev_auto_examine": true, 00:25:05.629 "iobuf_small_cache_size": 128, 00:25:05.629 "iobuf_large_cache_size": 16 00:25:05.629 } 00:25:05.629 }, 00:25:05.629 { 00:25:05.629 "method": "bdev_raid_set_options", 00:25:05.629 "params": { 00:25:05.629 "process_window_size_kb": 1024, 00:25:05.629 "process_max_bandwidth_mb_sec": 0 00:25:05.629 } 00:25:05.629 }, 00:25:05.629 { 00:25:05.629 "method": "bdev_iscsi_set_options", 00:25:05.629 "params": { 00:25:05.629 "timeout_sec": 30 00:25:05.629 } 00:25:05.629 }, 00:25:05.629 { 00:25:05.629 "method": "bdev_nvme_set_options", 00:25:05.629 "params": { 00:25:05.629 "action_on_timeout": "none", 00:25:05.629 "timeout_us": 0, 00:25:05.629 "timeout_admin_us": 0, 00:25:05.629 "keep_alive_timeout_ms": 10000, 00:25:05.629 "arbitration_burst": 0, 00:25:05.629 "low_priority_weight": 0, 00:25:05.629 "medium_priority_weight": 0, 00:25:05.629 "high_priority_weight": 0, 00:25:05.629 "nvme_adminq_poll_period_us": 10000, 00:25:05.629 "nvme_ioq_poll_period_us": 0, 00:25:05.629 "io_queue_requests": 512, 00:25:05.629 "delay_cmd_submit": true, 00:25:05.629 "transport_retry_count": 4, 00:25:05.629 "bdev_retry_count": 3, 00:25:05.629 "transport_ack_timeout": 0, 00:25:05.629 "ctrlr_loss_timeout_sec": 0, 00:25:05.629 "reconnect_delay_sec": 0, 00:25:05.629 "fast_io_fail_timeout_sec": 0, 00:25:05.629 "disable_auto_failback": false, 00:25:05.629 "generate_uuids": false, 00:25:05.629 "transport_tos": 0, 00:25:05.629 "nvme_error_stat": false, 00:25:05.629 "rdma_srq_size": 0, 00:25:05.629 "io_path_stat": false, 00:25:05.629 "allow_accel_sequence": false, 00:25:05.629 "rdma_max_cq_size": 0, 00:25:05.629 "rdma_cm_event_timeout_ms": 0, 00:25:05.629 "dhchap_digests": [ 00:25:05.629 "sha256", 00:25:05.629 "sha384", 
00:25:05.629 "sha512" 00:25:05.629 ], 00:25:05.629 "dhchap_dhgroups": [ 00:25:05.629 "null", 00:25:05.629 "ffdhe2048", 00:25:05.629 "ffdhe3072", 00:25:05.629 "ffdhe4096", 00:25:05.629 "ffdhe6144", 00:25:05.629 "ffdhe8192" 00:25:05.629 ] 00:25:05.629 } 00:25:05.629 }, 00:25:05.629 { 00:25:05.629 "method": "bdev_nvme_attach_controller", 00:25:05.629 "params": { 00:25:05.629 "name": "nvme0", 00:25:05.629 "trtype": "TCP", 00:25:05.629 "adrfam": "IPv4", 00:25:05.629 "traddr": "127.0.0.1", 00:25:05.629 "trsvcid": "4420", 00:25:05.629 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:05.629 "prchk_reftag": false, 00:25:05.629 "prchk_guard": false, 00:25:05.629 "ctrlr_loss_timeout_sec": 0, 00:25:05.629 "reconnect_delay_sec": 0, 00:25:05.629 "fast_io_fail_timeout_sec": 0, 00:25:05.629 "psk": "key0", 00:25:05.629 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:05.629 "hdgst": false, 00:25:05.629 "ddgst": false 00:25:05.629 } 00:25:05.629 }, 00:25:05.629 { 00:25:05.629 "method": "bdev_nvme_set_hotplug", 00:25:05.629 "params": { 00:25:05.629 "period_us": 100000, 00:25:05.629 "enable": false 00:25:05.629 } 00:25:05.629 }, 00:25:05.629 { 00:25:05.629 "method": "bdev_wait_for_examine" 00:25:05.629 } 00:25:05.629 ] 00:25:05.629 }, 00:25:05.629 { 00:25:05.629 "subsystem": "nbd", 00:25:05.629 "config": [] 00:25:05.629 } 00:25:05.629 ] 00:25:05.629 }' 00:25:05.629 22:56:20 keyring_file -- keyring/file.sh@115 -- # killprocess 99432 00:25:05.629 22:56:20 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99432 ']' 00:25:05.629 22:56:20 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99432 00:25:05.629 22:56:20 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:05.629 22:56:20 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:05.629 22:56:20 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99432 00:25:05.630 22:56:20 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:05.630 22:56:20 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:05.630 killing process with pid 99432 00:25:05.630 22:56:20 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99432' 00:25:05.630 22:56:20 keyring_file -- common/autotest_common.sh@969 -- # kill 99432 00:25:05.630 Received shutdown signal, test time was about 1.000000 seconds 00:25:05.630 00:25:05.630 Latency(us) 00:25:05.630 [2024-12-07T22:56:20.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.630 [2024-12-07T22:56:20.396Z] =================================================================================================================== 00:25:05.630 [2024-12-07T22:56:20.396Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:05.630 22:56:20 keyring_file -- common/autotest_common.sh@974 -- # wait 99432 00:25:05.630 22:56:20 keyring_file -- keyring/file.sh@118 -- # bperfpid=99677 00:25:05.630 22:56:20 keyring_file -- keyring/file.sh@120 -- # waitforlisten 99677 /var/tmp/bperf.sock 00:25:05.630 22:56:20 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99677 ']' 00:25:05.630 22:56:20 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:05.630 22:56:20 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:25:05.630 "subsystems": [ 00:25:05.630 { 00:25:05.630 "subsystem": "keyring", 00:25:05.630 "config": [ 00:25:05.630 { 00:25:05.630 "method": "keyring_file_add_key", 00:25:05.630 "params": { 00:25:05.630 "name": "key0", 00:25:05.630 
"path": "/tmp/tmp.VxsqUDjxC4" 00:25:05.630 } 00:25:05.630 }, 00:25:05.630 { 00:25:05.630 "method": "keyring_file_add_key", 00:25:05.630 "params": { 00:25:05.630 "name": "key1", 00:25:05.630 "path": "/tmp/tmp.Em7IlsvFv9" 00:25:05.630 } 00:25:05.630 } 00:25:05.630 ] 00:25:05.630 }, 00:25:05.630 { 00:25:05.630 "subsystem": "iobuf", 00:25:05.630 "config": [ 00:25:05.630 { 00:25:05.630 "method": "iobuf_set_options", 00:25:05.630 "params": { 00:25:05.630 "small_pool_count": 8192, 00:25:05.630 "large_pool_count": 1024, 00:25:05.630 "small_bufsize": 8192, 00:25:05.630 "large_bufsize": 135168 00:25:05.630 } 00:25:05.630 } 00:25:05.630 ] 00:25:05.630 }, 00:25:05.630 { 00:25:05.630 "subsystem": "sock", 00:25:05.630 "config": [ 00:25:05.630 { 00:25:05.630 "method": "sock_set_default_impl", 00:25:05.630 "params": { 00:25:05.630 "impl_name": "uring" 00:25:05.630 } 00:25:05.630 }, 00:25:05.630 { 00:25:05.630 "method": "sock_impl_set_options", 00:25:05.630 "params": { 00:25:05.630 "impl_name": "ssl", 00:25:05.630 "recv_buf_size": 4096, 00:25:05.630 "send_buf_size": 4096, 00:25:05.630 "enable_recv_pipe": true, 00:25:05.630 "enable_quickack": false, 00:25:05.630 "enable_placement_id": 0, 00:25:05.630 "enable_zerocopy_send_server": true, 00:25:05.630 "enable_zerocopy_send_client": false, 00:25:05.630 "zerocopy_threshold": 0, 00:25:05.630 "tls_version": 0, 00:25:05.630 "enable_ktls": false 00:25:05.630 } 00:25:05.630 }, 00:25:05.630 { 00:25:05.630 "method": "sock_impl_set_options", 00:25:05.630 "params": { 00:25:05.630 "impl_name": "posix", 00:25:05.630 "recv_buf_size": 2097152, 00:25:05.630 "send_buf_size": 2097152, 00:25:05.630 "enable_recv_pipe": true, 00:25:05.630 "enable_quickack": false, 00:25:05.630 "enable_placement_id": 0, 00:25:05.630 "enable_zerocopy_send_server": true, 00:25:05.630 "enable_zerocopy_send_client": false, 00:25:05.630 "zerocopy_threshold": 0, 00:25:05.630 "tls_version": 0, 00:25:05.630 "enable_ktls": false 00:25:05.630 } 00:25:05.630 }, 00:25:05.630 { 00:25:05.630 "method": "sock_impl_set_options", 00:25:05.630 "params": { 00:25:05.630 "impl_name": "uring", 00:25:05.630 "recv_buf_size": 2097152, 00:25:05.630 "send_buf_size": 2097152, 00:25:05.630 "enable_recv_pipe": true, 00:25:05.630 "enable_quickack": false, 00:25:05.630 "enable_placement_id": 0, 00:25:05.630 "enable_zerocopy_send_server": false, 00:25:05.630 "enable_zerocopy_send_client": false, 00:25:05.630 "zerocopy_threshold": 0, 00:25:05.630 "tls_version": 0, 00:25:05.630 "enable_ktls": false 00:25:05.630 } 00:25:05.630 } 00:25:05.630 ] 00:25:05.630 }, 00:25:05.630 { 00:25:05.630 "subsystem": "vmd", 00:25:05.630 "config": [] 00:25:05.630 }, 00:25:05.630 { 00:25:05.630 "subsystem": "accel", 00:25:05.630 "config": [ 00:25:05.630 { 00:25:05.630 "method": "accel_set_options", 00:25:05.630 "params": { 00:25:05.630 "small_cache_size": 128, 00:25:05.630 "large_cache_size": 16, 00:25:05.630 "task_count": 2048, 00:25:05.630 "sequence_count": 2048, 00:25:05.630 "buf_count": 2048 00:25:05.630 } 00:25:05.630 } 00:25:05.630 ] 00:25:05.630 }, 00:25:05.630 { 00:25:05.630 "subsystem": "bdev", 00:25:05.630 "config": [ 00:25:05.630 { 00:25:05.630 "method": "bdev_set_options", 00:25:05.630 "params": { 00:25:05.630 "bdev_io_pool_size": 65535, 00:25:05.630 "bdev_io_cache_size": 256, 00:25:05.630 "bdev_auto_examine": true, 00:25:05.630 "iobuf_small_cache_size": 128, 00:25:05.630 "iobuf_large_cache_size": 16 00:25:05.630 } 00:25:05.630 }, 00:25:05.630 { 00:25:05.630 "method": "bdev_raid_set_options", 00:25:05.630 "params": { 00:25:05.630 
"process_window_size_kb": 1024, 00:25:05.630 "process_max_bandwidth_mb_sec": 0 00:25:05.630 } 00:25:05.630 }, 00:25:05.630 { 00:25:05.630 "method": "bdev_iscsi_set_options", 00:25:05.630 "params": { 00:25:05.630 "timeout_sec": 30 00:25:05.630 } 00:25:05.630 }, 00:25:05.630 { 00:25:05.630 "method": "bdev_nvme_set_options", 00:25:05.630 "params": { 00:25:05.630 "action_on_timeout": "none", 00:25:05.630 "timeout_us": 0, 00:25:05.630 "timeout_admin_us": 0, 00:25:05.630 "keep_alive_timeout_ms": 10000, 00:25:05.630 "arbitration_burst": 0, 00:25:05.630 "low_priority_weight": 0, 00:25:05.630 "medium_priority_weight": 0, 00:25:05.630 "high_priority_weight": 0, 00:25:05.630 "nvme_adminq_poll_period_us": 10000, 00:25:05.630 "nvme_ioq_poll_period_us": 0, 00:25:05.630 22:56:20 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:05.630 "io_queue_requests": 512, 00:25:05.630 "delay_cmd_submit": true, 00:25:05.630 "transport_retry_count": 4, 00:25:05.630 "bdev_retry_count": 3, 00:25:05.630 "transport_ack_timeout": 0, 00:25:05.630 "ctrlr_loss_timeout_sec": 0, 00:25:05.630 "reconnect_delay_sec": 0, 00:25:05.630 "fast_io_fail_timeout_sec": 0, 00:25:05.630 "disable_auto_failback": false, 00:25:05.630 "generate_uuids": false, 00:25:05.630 "transport_tos": 0, 00:25:05.630 "nvme_error_stat": false, 00:25:05.630 "rdma_srq_size": 0, 00:25:05.630 "io_path_stat": false, 00:25:05.630 "allow_accel_sequence": false, 00:25:05.630 "rdma_max_cq_size": 0, 00:25:05.630 "rdma_cm_event_timeout_ms": 0, 00:25:05.630 "dhchap_digests": [ 00:25:05.630 "sha256", 00:25:05.630 "sha384", 00:25:05.630 "sha512" 00:25:05.630 ], 00:25:05.630 "dhchap_dhgroups": [ 00:25:05.630 "null", 00:25:05.630 "ffdhe2048", 00:25:05.630 "ffdhe3072", 00:25:05.630 "ffdhe4096", 00:25:05.630 "ffdhe6144", 00:25:05.630 "ffdhe8192" 00:25:05.630 ] 00:25:05.630 } 00:25:05.630 }, 00:25:05.630 { 00:25:05.630 "method": "bdev_nvme_attach_controller", 00:25:05.630 "params": { 00:25:05.630 "name": "nvme0", 00:25:05.630 "trtype": "TCP", 00:25:05.630 "adrfam": "IPv4", 00:25:05.630 "traddr": "127.0.0.1", 00:25:05.630 "trsvcid": "4420", 00:25:05.630 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:05.630 "prchk_reftag": false, 00:25:05.631 "prchk_guard": false, 00:25:05.631 "ctrlr_loss_timeout_sec": 0, 00:25:05.631 "reconnect_delay_sec": 0, 00:25:05.631 "fast_io_fail_timeout_sec": 0, 00:25:05.631 "psk": "key0", 00:25:05.631 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:05.631 "hdgst": false, 00:25:05.631 "ddgst": false 00:25:05.631 } 00:25:05.631 }, 00:25:05.631 { 00:25:05.631 "method": "bdev_nvme_set_hotplug", 00:25:05.631 "params": { 00:25:05.631 "period_us": 100000, 00:25:05.631 "enable": false 00:25:05.631 } 00:25:05.631 }, 00:25:05.631 { 00:25:05.631 "method": "bdev_wait_for_examine" 00:25:05.631 } 00:25:05.631 ] 00:25:05.631 }, 00:25:05.631 { 00:25:05.631 "subsystem": "nbd", 00:25:05.631 "config": [] 00:25:05.631 } 00:25:05.631 ] 00:25:05.631 }' 00:25:05.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:05.631 22:56:20 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:05.631 22:56:20 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:25:05.631 22:56:20 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:05.631 22:56:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:05.631 [2024-12-07 22:56:20.353365] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 00:25:05.631 [2024-12-07 22:56:20.353467] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99677 ] 00:25:05.890 [2024-12-07 22:56:20.483728] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.890 [2024-12-07 22:56:20.516884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.890 [2024-12-07 22:56:20.624365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:06.149 [2024-12-07 22:56:20.659923] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:06.717 22:56:21 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:06.717 22:56:21 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:25:06.717 22:56:21 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:25:06.717 22:56:21 keyring_file -- keyring/file.sh@121 -- # jq length 00:25:06.717 22:56:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.976 22:56:21 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:25:06.977 22:56:21 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:25:06.977 22:56:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:06.977 22:56:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:06.977 22:56:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.977 22:56:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:06.977 22:56:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.236 22:56:21 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:25:07.236 22:56:21 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:25:07.236 22:56:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:07.236 22:56:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:07.236 22:56:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:07.236 22:56:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:07.236 22:56:21 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.495 22:56:22 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:25:07.495 22:56:22 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:25:07.495 22:56:22 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:25:07.495 22:56:22 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:25:07.754 22:56:22 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:25:07.754 22:56:22 keyring_file -- keyring/file.sh@1 -- # cleanup 00:25:07.754 22:56:22 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.VxsqUDjxC4 /tmp/tmp.Em7IlsvFv9 00:25:07.754 22:56:22 
keyring_file -- keyring/file.sh@20 -- # killprocess 99677 00:25:07.754 22:56:22 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99677 ']' 00:25:07.754 22:56:22 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99677 00:25:07.754 22:56:22 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:07.754 22:56:22 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:07.754 22:56:22 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99677 00:25:07.754 killing process with pid 99677 00:25:07.754 Received shutdown signal, test time was about 1.000000 seconds 00:25:07.754 00:25:07.754 Latency(us) 00:25:07.754 [2024-12-07T22:56:22.520Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.754 [2024-12-07T22:56:22.520Z] =================================================================================================================== 00:25:07.754 [2024-12-07T22:56:22.520Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:07.754 22:56:22 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:07.754 22:56:22 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:07.754 22:56:22 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99677' 00:25:07.754 22:56:22 keyring_file -- common/autotest_common.sh@969 -- # kill 99677 00:25:07.754 22:56:22 keyring_file -- common/autotest_common.sh@974 -- # wait 99677 00:25:08.014 22:56:22 keyring_file -- keyring/file.sh@21 -- # killprocess 99427 00:25:08.014 22:56:22 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99427 ']' 00:25:08.014 22:56:22 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99427 00:25:08.014 22:56:22 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:08.014 22:56:22 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:08.014 22:56:22 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99427 00:25:08.014 killing process with pid 99427 00:25:08.014 22:56:22 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:08.014 22:56:22 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:08.014 22:56:22 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99427' 00:25:08.014 22:56:22 keyring_file -- common/autotest_common.sh@969 -- # kill 99427 00:25:08.014 22:56:22 keyring_file -- common/autotest_common.sh@974 -- # wait 99427 00:25:08.274 00:25:08.274 real 0m14.220s 00:25:08.274 user 0m36.850s 00:25:08.274 sys 0m2.548s 00:25:08.274 22:56:22 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:08.274 22:56:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:08.274 ************************************ 00:25:08.274 END TEST keyring_file 00:25:08.274 ************************************ 00:25:08.274 22:56:22 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:25:08.274 22:56:22 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:08.274 22:56:22 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:08.274 22:56:22 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:08.274 22:56:22 -- common/autotest_common.sh@10 -- # set +x 00:25:08.274 ************************************ 00:25:08.274 START TEST keyring_linux 00:25:08.274 
************************************ 00:25:08.274 22:56:22 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:08.274 Joined session keyring: 664795183 00:25:08.274 * Looking for test storage... 00:25:08.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:08.274 22:56:22 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:08.274 22:56:22 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:25:08.274 22:56:22 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:08.274 22:56:22 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@345 -- # : 1 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:08.274 22:56:22 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:25:08.274 22:56:23 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:25:08.274 22:56:23 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:25:08.274 22:56:23 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:25:08.274 22:56:23 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:08.274 22:56:23 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:25:08.274 22:56:23 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:25:08.274 22:56:23 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:08.274 22:56:23 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:08.274 22:56:23 keyring_linux -- scripts/common.sh@368 -- # return 0 00:25:08.274 22:56:23 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:08.274 22:56:23 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:08.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.274 --rc genhtml_branch_coverage=1 00:25:08.274 --rc genhtml_function_coverage=1 00:25:08.274 --rc genhtml_legend=1 00:25:08.274 --rc geninfo_all_blocks=1 00:25:08.274 --rc geninfo_unexecuted_blocks=1 00:25:08.274 00:25:08.274 ' 00:25:08.274 22:56:23 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:08.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.275 --rc genhtml_branch_coverage=1 00:25:08.275 --rc genhtml_function_coverage=1 00:25:08.275 --rc genhtml_legend=1 00:25:08.275 --rc geninfo_all_blocks=1 00:25:08.275 --rc geninfo_unexecuted_blocks=1 00:25:08.275 00:25:08.275 ' 00:25:08.275 22:56:23 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:08.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.275 --rc genhtml_branch_coverage=1 00:25:08.275 --rc genhtml_function_coverage=1 00:25:08.275 --rc genhtml_legend=1 00:25:08.275 --rc geninfo_all_blocks=1 00:25:08.275 --rc geninfo_unexecuted_blocks=1 00:25:08.275 00:25:08.275 ' 00:25:08.275 22:56:23 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:08.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:08.275 --rc genhtml_branch_coverage=1 00:25:08.275 --rc genhtml_function_coverage=1 00:25:08.275 --rc genhtml_legend=1 00:25:08.275 --rc geninfo_all_blocks=1 00:25:08.275 --rc geninfo_unexecuted_blocks=1 00:25:08.275 00:25:08.275 ' 00:25:08.275 22:56:23 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:08.275 22:56:23 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.275 22:56:23 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=172623d7-6ce2-4bd9-8edf-50e4cb75e1d3 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:08.275 22:56:23 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:25:08.275 22:56:23 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.275 22:56:23 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.275 22:56:23 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.275 22:56:23 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.275 22:56:23 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.275 22:56:23 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.275 22:56:23 keyring_linux -- paths/export.sh@5 -- # export PATH 00:25:08.275 22:56:23 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:08.275 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:08.275 22:56:23 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:08.275 22:56:23 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:08.275 22:56:23 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:08.275 22:56:23 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:25:08.275 22:56:23 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:25:08.275 22:56:23 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:25:08.275 22:56:23 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:25:08.275 22:56:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:08.275 22:56:23 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:25:08.275 22:56:23 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:08.275 22:56:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:08.275 22:56:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:25:08.275 22:56:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:25:08.275 22:56:23 keyring_linux -- nvmf/common.sh@729 -- # python - 00:25:08.535 22:56:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:25:08.535 /tmp/:spdk-test:key0 00:25:08.535 22:56:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:25:08.535 22:56:23 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:25:08.535 22:56:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:08.535 22:56:23 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:25:08.535 22:56:23 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:08.535 22:56:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:08.535 22:56:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:25:08.535 22:56:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:25:08.535 22:56:23 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:08.535 22:56:23 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:25:08.535 22:56:23 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:08.535 22:56:23 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:25:08.535 22:56:23 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:25:08.535 22:56:23 keyring_linux -- nvmf/common.sh@729 -- # python - 00:25:08.535 22:56:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:25:08.535 /tmp/:spdk-test:key1 00:25:08.535 22:56:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:25:08.535 22:56:23 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=99800 00:25:08.535 22:56:23 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:08.535 22:56:23 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 99800 00:25:08.535 22:56:23 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 99800 ']' 00:25:08.535 22:56:23 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.535 22:56:23 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:08.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.535 22:56:23 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.535 22:56:23 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:08.535 22:56:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:08.535 [2024-12-07 22:56:23.198724] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
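The prep_key trace above (keyring/common.sh@15-23 driving nvmf/common.sh@726-729) writes each key file in the NVMe TLS configured-PSK interchange format and chmods it to 0600, which is what keyring_file's permission check rejected earlier in this run (the 0100660 error). A hedged reconstruction of what the inline "python -" appears to compute: the key bytes plus a CRC32, base64-wrapped between the "NVMeTLSkey-1" prefix and the digest field ("00" meaning no PSK digest); the little-endian CRC byte order is an assumption:

python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"  # the test key, used as ASCII bytes
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # byte order assumed
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
EOF

If the byte-order assumption holds, this prints the NVMeTLSkey-1:00:MDAx...: string that the keyctl lines below store in the kernel keyring.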
00:25:08.535 [2024-12-07 22:56:23.198830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99800 ] 00:25:08.794 [2024-12-07 22:56:23.326696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.794 [2024-12-07 22:56:23.357848] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.794 [2024-12-07 22:56:23.390118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:09.363 22:56:24 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:09.363 22:56:24 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:25:09.363 22:56:24 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:25:09.363 22:56:24 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.363 22:56:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:09.363 [2024-12-07 22:56:24.078525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.363 null0 00:25:09.363 [2024-12-07 22:56:24.110495] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:09.363 [2024-12-07 22:56:24.110687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:09.622 22:56:24 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.622 22:56:24 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:25:09.622 338131694 00:25:09.622 22:56:24 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:25:09.622 441028035 00:25:09.622 22:56:24 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=99818 00:25:09.622 22:56:24 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:25:09.622 22:56:24 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 99818 /var/tmp/bperf.sock 00:25:09.622 22:56:24 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 99818 ']' 00:25:09.622 22:56:24 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:09.622 22:56:24 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:09.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:09.622 22:56:24 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:09.622 22:56:24 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:09.622 22:56:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:09.622 [2024-12-07 22:56:24.192925] Starting SPDK v24.09.1-pre git sha1 b18e1bd62 / DPDK 22.11.4 initialization... 
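The keyctl lines above (keyring/linux.sh@66-67) are the kernel-side half of the test: both interchange-formatted PSKs are added to the session keyring ("@s") as "user"-type keys, and keyctl answers with the new serial numbers (338131694 and 441028035 in this run) that the later checks compare against. A condensed sketch of the same calls:

# add a PSK under the well-known name; keyctl prints the serial on success
sn0=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
keyctl print "$sn0"                     # echoes the stored PSK back
keyctl search @s user :spdk-test:key0   # name -> serial, as linux.sh@16 does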
00:25:09.622 [2024-12-07 22:56:24.193039] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99818 ] 00:25:09.622 [2024-12-07 22:56:24.333389] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.622 [2024-12-07 22:56:24.375014] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.556 22:56:25 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:10.556 22:56:25 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:25:10.556 22:56:25 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:25:10.556 22:56:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:25:10.814 22:56:25 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:25:10.814 22:56:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:11.073 [2024-12-07 22:56:25.582687] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:11.073 22:56:25 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:11.073 22:56:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:11.073 [2024-12-07 22:56:25.825712] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:11.332 nvme0n1 00:25:11.332 22:56:25 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:25:11.332 22:56:25 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:25:11.332 22:56:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:11.332 22:56:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:11.332 22:56:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:11.332 22:56:25 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:11.591 22:56:26 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:25:11.591 22:56:26 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:11.591 22:56:26 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:25:11.591 22:56:26 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:25:11.591 22:56:26 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:11.591 22:56:26 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:11.591 22:56:26 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:25:11.850 22:56:26 keyring_linux -- keyring/linux.sh@25 -- # sn=338131694 00:25:11.850 22:56:26 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:25:11.850 22:56:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
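check_keys (keyring/linux.sh@19-27, traced above and continuing below) cross-checks the two views of the key: the serial SPDK reports for ":spdk-test:key0" through keyring_get_keys must match what the kernel's session keyring returns, and keyctl print must hand the interchange PSK back verbatim. Restated as a hedged sketch of those steps:

sn_spdk=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
    | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
sn_kern=$(keyctl search @s user :spdk-test:key0)
[[ "$sn_spdk" == "$sn_kern" ]] && keyctl print "$sn_kern"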
00:25:11.850 22:56:26 keyring_linux -- keyring/linux.sh@26 -- # [[ 338131694 == \3\3\8\1\3\1\6\9\4 ]] 00:25:11.850 22:56:26 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 338131694 00:25:11.850 22:56:26 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:25:11.850 22:56:26 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:11.850 Running I/O for 1 seconds... 00:25:12.785 15502.00 IOPS, 60.55 MiB/s 00:25:12.785 Latency(us) 00:25:12.785 [2024-12-07T22:56:27.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.785 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:12.785 nvme0n1 : 1.05 14911.38 58.25 0.00 0.00 8515.90 7804.74 50760.61 00:25:12.785 [2024-12-07T22:56:27.551Z] =================================================================================================================== 00:25:12.785 [2024-12-07T22:56:27.551Z] Total : 14911.38 58.25 0.00 0.00 8515.90 7804.74 50760.61 00:25:12.785 { 00:25:12.785 "results": [ 00:25:12.785 { 00:25:12.785 "job": "nvme0n1", 00:25:12.785 "core_mask": "0x2", 00:25:12.785 "workload": "randread", 00:25:12.785 "status": "finished", 00:25:12.785 "queue_depth": 128, 00:25:12.785 "io_size": 4096, 00:25:12.785 "runtime": 1.048327, 00:25:12.785 "iops": 14911.3778429822, 00:25:12.785 "mibps": 58.24756969914922, 00:25:12.785 "io_failed": 0, 00:25:12.785 "io_timeout": 0, 00:25:12.785 "avg_latency_us": 8515.895382897554, 00:25:12.785 "min_latency_us": 7804.741818181818, 00:25:12.785 "max_latency_us": 50760.61090909091 00:25:12.785 } 00:25:12.785 ], 00:25:12.785 "core_count": 1 00:25:12.785 } 00:25:13.044 22:56:27 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:13.044 22:56:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:13.302 22:56:27 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:25:13.303 22:56:27 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:25:13.303 22:56:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:13.303 22:56:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:13.303 22:56:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:13.303 22:56:27 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:13.303 22:56:28 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:25:13.303 22:56:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:13.303 22:56:28 keyring_linux -- keyring/linux.sh@23 -- # return 00:25:13.303 22:56:28 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:13.303 22:56:28 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:25:13.303 22:56:28 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:13.303 
22:56:28 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:13.562 22:56:28 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:13.562 22:56:28 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:13.562 22:56:28 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:13.562 22:56:28 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:13.562 22:56:28 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:13.562 [2024-12-07 22:56:28.273908] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:13.562 [2024-12-07 22:56:28.274647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d61f30 (107): Transport endpoint is not connected 00:25:13.562 [2024-12-07 22:56:28.275603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d61f30 (9): Bad file descriptor 00:25:13.562 [2024-12-07 22:56:28.276599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:13.562 [2024-12-07 22:56:28.276617] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:13.562 [2024-12-07 22:56:28.276641] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:13.562 [2024-12-07 22:56:28.276650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
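This failure is the point of the negative test: the controller is attached with --psk :spdk-test:key1 while the first, successful attach used key0, so the PSK does not match what the target expects, the TLS connection is torn down, and rpc.py surfaces the -5 Input/output error in the request/response dump that follows. The NOT wrapper driving it (common/autotest_common.sh@650-677 in the trace) inverts the exit status; a simplified sketch, not the exact helper:

NOT() {
    local es=0
    "$@" || es=$?                # run the command that is expected to fail
    (( es > 128 )) && return 1   # killed by a signal: count as a real failure
    (( es != 0 ))                # NOT succeeds only if the command failed cleanly
}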
00:25:13.562 request: 00:25:13.562 { 00:25:13.562 "name": "nvme0", 00:25:13.562 "trtype": "tcp", 00:25:13.562 "traddr": "127.0.0.1", 00:25:13.562 "adrfam": "ipv4", 00:25:13.562 "trsvcid": "4420", 00:25:13.562 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:13.562 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:13.562 "prchk_reftag": false, 00:25:13.562 "prchk_guard": false, 00:25:13.562 "hdgst": false, 00:25:13.562 "ddgst": false, 00:25:13.562 "psk": ":spdk-test:key1", 00:25:13.562 "allow_unrecognized_csi": false, 00:25:13.562 "method": "bdev_nvme_attach_controller", 00:25:13.562 "req_id": 1 00:25:13.562 } 00:25:13.562 Got JSON-RPC error response 00:25:13.562 response: 00:25:13.562 { 00:25:13.562 "code": -5, 00:25:13.562 "message": "Input/output error" 00:25:13.562 } 00:25:13.562 22:56:28 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:25:13.562 22:56:28 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:13.562 22:56:28 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:13.562 22:56:28 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:13.562 22:56:28 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:13.562 22:56:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:13.562 22:56:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:13.562 22:56:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:13.562 22:56:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:13.562 22:56:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:13.562 22:56:28 keyring_linux -- keyring/linux.sh@33 -- # sn=338131694 00:25:13.562 22:56:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 338131694 00:25:13.562 1 links removed 00:25:13.562 22:56:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:13.562 22:56:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:13.562 22:56:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:13.562 22:56:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:13.562 22:56:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:25:13.562 22:56:28 keyring_linux -- keyring/linux.sh@33 -- # sn=441028035 00:25:13.562 22:56:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 441028035 00:25:13.562 1 links removed 00:25:13.562 22:56:28 keyring_linux -- keyring/linux.sh@41 -- # killprocess 99818 00:25:13.562 22:56:28 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 99818 ']' 00:25:13.562 22:56:28 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 99818 00:25:13.562 22:56:28 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:25:13.562 22:56:28 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:13.562 22:56:28 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99818 00:25:13.822 22:56:28 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:13.822 22:56:28 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:13.822 killing process with pid 99818 00:25:13.822 22:56:28 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99818' 00:25:13.822 22:56:28 keyring_linux -- common/autotest_common.sh@969 -- # kill 99818 00:25:13.822 Received shutdown signal, test time was about 1.000000 seconds 00:25:13.822 00:25:13.822 Latency(us) 
00:25:13.822 [2024-12-07T22:56:28.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:13.822 [2024-12-07T22:56:28.588Z] ===================================================================================================================
00:25:13.822 [2024-12-07T22:56:28.588Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:13.822 22:56:28 keyring_linux -- common/autotest_common.sh@974 -- # wait 99818
00:25:13.822 22:56:28 keyring_linux -- keyring/linux.sh@42 -- # killprocess 99800
00:25:13.822 22:56:28 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 99800 ']'
00:25:13.822 22:56:28 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 99800
00:25:13.822 22:56:28 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:25:13.822 22:56:28 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:13.822 22:56:28 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99800
00:25:13.822 22:56:28 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:13.822 22:56:28 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:13.822 killing process with pid 99800 22:56:28 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99800'
00:25:13.822 22:56:28 keyring_linux -- common/autotest_common.sh@969 -- # kill 99800
00:25:13.822 22:56:28 keyring_linux -- common/autotest_common.sh@974 -- # wait 99800
00:25:14.082
00:25:14.082 real 0m5.881s
00:25:14.082 user 0m11.534s
00:25:14.082 sys 0m1.354s
00:25:14.082 22:56:28 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:14.082 22:56:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:25:14.082 ************************************
00:25:14.082 END TEST keyring_linux
00:25:14.082 ************************************
00:25:14.082 22:56:28 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']'
00:25:14.082 22:56:28 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:25:14.082 22:56:28 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:25:14.082 22:56:28 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:25:14.082 22:56:28 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']'
00:25:14.082 22:56:28 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:25:14.082 22:56:28 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:25:14.082 22:56:28 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:25:14.082 22:56:28 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:25:14.082 22:56:28 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:25:14.082 22:56:28 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:25:14.082 22:56:28 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]]
00:25:14.082 22:56:28 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:25:14.082 22:56:28 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:25:14.082 22:56:28 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]]
00:25:14.082 22:56:28 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT
00:25:14.082 22:56:28 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup
00:25:14.082 22:56:28 -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:14.082 22:56:28 -- common/autotest_common.sh@10 -- # set +x
00:25:14.082 22:56:28 -- spdk/autotest.sh@384 -- # autotest_cleanup
00:25:14.082 22:56:28 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:25:14.082 22:56:28 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:25:14.082 22:56:28 -- common/autotest_common.sh@10 -- # set +x
00:25:15.986 INFO: APP EXITING
00:25:15.986 INFO: killing all VMs
00:25:15.986 INFO: killing vhost app
00:25:15.986 INFO: EXIT DONE
00:25:16.555 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:25:16.555 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:25:16.555 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:25:17.493 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:25:17.493 Cleaning
00:25:17.493 Removing: /var/run/dpdk/spdk0/config
00:25:17.493 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:25:17.493 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:25:17.493 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:25:17.493 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:25:17.493 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:25:17.493 Removing: /var/run/dpdk/spdk0/hugepage_info
00:25:17.493 Removing: /var/run/dpdk/spdk1/config
00:25:17.493 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:25:17.493 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:25:17.493 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:25:17.493 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:25:17.493 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:25:17.493 Removing: /var/run/dpdk/spdk1/hugepage_info
00:25:17.493 Removing: /var/run/dpdk/spdk2/config
00:25:17.493 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:25:17.493 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:25:17.493 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:25:17.493 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:25:17.493 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:25:17.493 Removing: /var/run/dpdk/spdk2/hugepage_info
00:25:17.493 Removing: /var/run/dpdk/spdk3/config
00:25:17.493 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:25:17.493 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:25:17.493 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:25:17.493 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:25:17.493 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:25:17.493 Removing: /var/run/dpdk/spdk3/hugepage_info
00:25:17.493 Removing: /var/run/dpdk/spdk4/config
00:25:17.493 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:25:17.493 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:25:17.493 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:25:17.493 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:25:17.493 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:25:17.493 Removing: /var/run/dpdk/spdk4/hugepage_info
00:25:17.493 Removing: /dev/shm/nvmf_trace.0
00:25:17.493 Removing: /dev/shm/spdk_tgt_trace.pid68952
00:25:17.493 Removing: /var/run/dpdk/spdk0
00:25:17.494 Removing: /var/run/dpdk/spdk1
00:25:17.494 Removing: /var/run/dpdk/spdk2
00:25:17.494 Removing: /var/run/dpdk/spdk3
00:25:17.494 Removing: /var/run/dpdk/spdk4
00:25:17.494 Removing: /var/run/dpdk/spdk_pid68799
00:25:17.494 Removing: /var/run/dpdk/spdk_pid68952
00:25:17.494 Removing: /var/run/dpdk/spdk_pid69145
00:25:17.494 Removing: /var/run/dpdk/spdk_pid69226
00:25:17.494 Removing: /var/run/dpdk/spdk_pid69246
00:25:17.494 Removing: /var/run/dpdk/spdk_pid69356
00:25:17.494 Removing: /var/run/dpdk/spdk_pid69365
00:25:17.494 Removing: /var/run/dpdk/spdk_pid69500
00:25:17.494 Removing: /var/run/dpdk/spdk_pid69696
00:25:17.494 Removing: /var/run/dpdk/spdk_pid69849
00:25:17.494 Removing: /var/run/dpdk/spdk_pid69922
00:25:17.494 Removing: /var/run/dpdk/spdk_pid69993
00:25:17.494 Removing: /var/run/dpdk/spdk_pid70092
00:25:17.494 Removing: /var/run/dpdk/spdk_pid70164
00:25:17.494 Removing: /var/run/dpdk/spdk_pid70203
00:25:17.494 Removing: /var/run/dpdk/spdk_pid70233
00:25:17.494 Removing: /var/run/dpdk/spdk_pid70302
00:25:17.494 Removing: /var/run/dpdk/spdk_pid70389
00:25:17.494 Removing: /var/run/dpdk/spdk_pid70835
00:25:17.494 Removing: /var/run/dpdk/spdk_pid70874
00:25:17.494 Removing: /var/run/dpdk/spdk_pid70912
00:25:17.494 Removing: /var/run/dpdk/spdk_pid70915
00:25:17.494 Removing: /var/run/dpdk/spdk_pid70976
00:25:17.494 Removing: /var/run/dpdk/spdk_pid70984
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71051
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71054
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71100
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71110
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71150
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71155
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71293
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71323
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71406
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71732
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71744
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71775
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71789
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71804
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71823
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71837
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71852
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71871
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71885
00:25:17.494 Removing: /var/run/dpdk/spdk_pid71900
00:25:17.753 Removing: /var/run/dpdk/spdk_pid71919
00:25:17.753 Removing: /var/run/dpdk/spdk_pid71933
00:25:17.753 Removing: /var/run/dpdk/spdk_pid71943
00:25:17.753 Removing: /var/run/dpdk/spdk_pid71962
00:25:17.753 Removing: /var/run/dpdk/spdk_pid71975
00:25:17.753 Removing: /var/run/dpdk/spdk_pid71991
00:25:17.753 Removing: /var/run/dpdk/spdk_pid72010
00:25:17.753 Removing: /var/run/dpdk/spdk_pid72023
00:25:17.753 Removing: /var/run/dpdk/spdk_pid72039
00:25:17.753 Removing: /var/run/dpdk/spdk_pid72069
00:25:17.753 Removing: /var/run/dpdk/spdk_pid72083
00:25:17.753 Removing: /var/run/dpdk/spdk_pid72107
00:25:17.753 Removing: /var/run/dpdk/spdk_pid72179
00:25:17.753 Removing: /var/run/dpdk/spdk_pid72207
00:25:17.753 Removing: /var/run/dpdk/spdk_pid72217
00:25:17.753 Removing: /var/run/dpdk/spdk_pid72240
00:25:17.753 Removing: /var/run/dpdk/spdk_pid72255
00:25:17.753 Removing: /var/run/dpdk/spdk_pid72257
00:25:17.753 Removing: /var/run/dpdk/spdk_pid72298
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72313
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72336
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72351
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72355
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72359
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72374
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72378
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72387
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72397
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72420
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72452
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72456
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72484
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72494
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72496
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72542
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72548
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72580
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72582
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72598
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72600
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72602
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72615
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72617
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72626
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72701
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72743
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72850
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72884
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72923
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72943
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72960
00:25:17.754 Removing: /var/run/dpdk/spdk_pid72974
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73011
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73021
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73099
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73115
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73159
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73215
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73271
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73301
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73395
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73443
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73470
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73702
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73794
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73817
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73852
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73880
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73919
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73947
00:25:17.754 Removing: /var/run/dpdk/spdk_pid73984
00:25:17.754 Removing: /var/run/dpdk/spdk_pid74366
00:25:17.754 Removing: /var/run/dpdk/spdk_pid74406
00:25:17.754 Removing: /var/run/dpdk/spdk_pid74751
00:25:17.754 Removing: /var/run/dpdk/spdk_pid75210
00:25:17.754 Removing: /var/run/dpdk/spdk_pid75483
00:25:17.754 Removing: /var/run/dpdk/spdk_pid76319
00:25:17.754 Removing: /var/run/dpdk/spdk_pid77228
00:25:17.754 Removing: /var/run/dpdk/spdk_pid77345
00:25:18.013 Removing: /var/run/dpdk/spdk_pid77413
00:25:18.013 Removing: /var/run/dpdk/spdk_pid78828
00:25:18.013 Removing: /var/run/dpdk/spdk_pid79129
00:25:18.013 Removing: /var/run/dpdk/spdk_pid82832
00:25:18.013 Removing: /var/run/dpdk/spdk_pid83192
00:25:18.013 Removing: /var/run/dpdk/spdk_pid83302
00:25:18.013 Removing: /var/run/dpdk/spdk_pid83429
00:25:18.013 Removing: /var/run/dpdk/spdk_pid83450
00:25:18.013 Removing: /var/run/dpdk/spdk_pid83471
00:25:18.013 Removing: /var/run/dpdk/spdk_pid83492
00:25:18.013 Removing: /var/run/dpdk/spdk_pid83577
00:25:18.013 Removing: /var/run/dpdk/spdk_pid83707
00:25:18.013 Removing: /var/run/dpdk/spdk_pid83838
00:25:18.013 Removing: /var/run/dpdk/spdk_pid83912
00:25:18.013 Removing: /var/run/dpdk/spdk_pid84099
00:25:18.013 Removing: /var/run/dpdk/spdk_pid84170
00:25:18.013 Removing: /var/run/dpdk/spdk_pid84250
00:25:18.013 Removing: /var/run/dpdk/spdk_pid84608
00:25:18.013 Removing: /var/run/dpdk/spdk_pid85017
00:25:18.013 Removing: /var/run/dpdk/spdk_pid85018
00:25:18.013 Removing: /var/run/dpdk/spdk_pid85019
00:25:18.013 Removing: /var/run/dpdk/spdk_pid85270
00:25:18.013 Removing: /var/run/dpdk/spdk_pid85512
00:25:18.013 Removing: /var/run/dpdk/spdk_pid85520
00:25:18.013 Removing: /var/run/dpdk/spdk_pid87881
00:25:18.013 Removing: /var/run/dpdk/spdk_pid87883
00:25:18.013 Removing: /var/run/dpdk/spdk_pid88209
00:25:18.013 Removing: /var/run/dpdk/spdk_pid88223
00:25:18.013 Removing: /var/run/dpdk/spdk_pid88237
00:25:18.014 Removing: /var/run/dpdk/spdk_pid88268
00:25:18.014 Removing: /var/run/dpdk/spdk_pid88273
00:25:18.014 Removing: /var/run/dpdk/spdk_pid88357
00:25:18.014 Removing: /var/run/dpdk/spdk_pid88365
00:25:18.014 Removing: /var/run/dpdk/spdk_pid88473
00:25:18.014 Removing: /var/run/dpdk/spdk_pid88475
00:25:18.014 Removing: /var/run/dpdk/spdk_pid88583
00:25:18.014 Removing: /var/run/dpdk/spdk_pid88585
00:25:18.014 Removing: /var/run/dpdk/spdk_pid89027
00:25:18.014 Removing: /var/run/dpdk/spdk_pid89070
00:25:18.014 Removing: /var/run/dpdk/spdk_pid89179
00:25:18.014 Removing: /var/run/dpdk/spdk_pid89258
00:25:18.014 Removing: /var/run/dpdk/spdk_pid89614
00:25:18.014 Removing: /var/run/dpdk/spdk_pid89799
00:25:18.014 Removing: /var/run/dpdk/spdk_pid90212
00:25:18.014 Removing: /var/run/dpdk/spdk_pid90756
00:25:18.014 Removing: /var/run/dpdk/spdk_pid91611
00:25:18.014 Removing: /var/run/dpdk/spdk_pid92241
00:25:18.014 Removing: /var/run/dpdk/spdk_pid92243
00:25:18.014 Removing: /var/run/dpdk/spdk_pid94263
00:25:18.014 Removing: /var/run/dpdk/spdk_pid94310
00:25:18.014 Removing: /var/run/dpdk/spdk_pid94363
00:25:18.014 Removing: /var/run/dpdk/spdk_pid94412
00:25:18.014 Removing: /var/run/dpdk/spdk_pid94520
00:25:18.014 Removing: /var/run/dpdk/spdk_pid94575
00:25:18.014 Removing: /var/run/dpdk/spdk_pid94622
00:25:18.014 Removing: /var/run/dpdk/spdk_pid94676
00:25:18.014 Removing: /var/run/dpdk/spdk_pid95025
00:25:18.014 Removing: /var/run/dpdk/spdk_pid96230
00:25:18.014 Removing: /var/run/dpdk/spdk_pid96372
00:25:18.014 Removing: /var/run/dpdk/spdk_pid96615
00:25:18.014 Removing: /var/run/dpdk/spdk_pid97196
00:25:18.014 Removing: /var/run/dpdk/spdk_pid97356
00:25:18.014 Removing: /var/run/dpdk/spdk_pid97513
00:25:18.014 Removing: /var/run/dpdk/spdk_pid97604
00:25:18.014 Removing: /var/run/dpdk/spdk_pid97767
00:25:18.014 Removing: /var/run/dpdk/spdk_pid97876
00:25:18.014 Removing: /var/run/dpdk/spdk_pid98571
00:25:18.014 Removing: /var/run/dpdk/spdk_pid98612
00:25:18.014 Removing: /var/run/dpdk/spdk_pid98642
00:25:18.014 Removing: /var/run/dpdk/spdk_pid98891
00:25:18.014 Removing: /var/run/dpdk/spdk_pid98931
00:25:18.014 Removing: /var/run/dpdk/spdk_pid98963
00:25:18.014 Removing: /var/run/dpdk/spdk_pid99427
00:25:18.014 Removing: /var/run/dpdk/spdk_pid99432
00:25:18.014 Removing: /var/run/dpdk/spdk_pid99677
00:25:18.014 Removing: /var/run/dpdk/spdk_pid99800
00:25:18.014 Removing: /var/run/dpdk/spdk_pid99818
00:25:18.014 Clean
00:25:18.272 22:56:32 -- common/autotest_common.sh@1451 -- # return 0
00:25:18.272 22:56:32 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:25:18.272 22:56:32 -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:18.272 22:56:32 -- common/autotest_common.sh@10 -- # set +x
00:25:18.272 22:56:32 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:25:18.272 22:56:32 -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:18.272 22:56:32 -- common/autotest_common.sh@10 -- # set +x
00:25:18.272 22:56:32 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:25:18.272 22:56:32 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:25:18.272 22:56:32 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:25:18.272 22:56:32 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:25:18.272 22:56:32 -- spdk/autotest.sh@394 -- # hostname
00:25:18.272 22:56:32 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:25:18.530 geninfo: WARNING: invalid characters removed from testname!
00:25:40.475 22:56:54 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:43.007 22:56:57 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:45.550 22:57:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:48.109 22:57:02 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:50.641 22:57:05 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:53.171 22:57:07 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:25:55.708 22:57:09 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:25:55.708 22:57:09 -- common/autotest_common.sh@1680 -- $ [[ y == y ]]
00:25:55.708 22:57:09 -- common/autotest_common.sh@1681 -- $ lcov --version
00:25:55.708 22:57:10 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}'
00:25:55.708 22:57:10 -- common/autotest_common.sh@1681 -- $ lt 1.15 2
00:25:55.708 22:57:10 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2
00:25:55.708 22:57:10 -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:25:55.708 22:57:10 -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:25:55.708 22:57:10 -- scripts/common.sh@336 -- $ IFS=.-:
00:25:55.708 22:57:10 -- scripts/common.sh@336 -- $ read -ra ver1
00:25:55.708 22:57:10 -- scripts/common.sh@337 -- $ IFS=.-:
00:25:55.708 22:57:10 -- scripts/common.sh@337 -- $ read -ra ver2
00:25:55.708 22:57:10 -- scripts/common.sh@338 -- $ local 'op=<'
00:25:55.708 22:57:10 -- scripts/common.sh@340 -- $ ver1_l=2
00:25:55.708 22:57:10 -- scripts/common.sh@341 -- $ ver2_l=1
00:25:55.708 22:57:10 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:25:55.708 22:57:10 -- scripts/common.sh@344 -- $ case "$op" in
00:25:55.708 22:57:10 -- scripts/common.sh@345 -- $ : 1
00:25:55.708 22:57:10 -- scripts/common.sh@364 -- $ (( v = 0 ))
00:25:55.708 22:57:10 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:55.708 22:57:10 -- scripts/common.sh@365 -- $ decimal 1
00:25:55.708 22:57:10 -- scripts/common.sh@353 -- $ local d=1
00:25:55.708 22:57:10 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:25:55.708 22:57:10 -- scripts/common.sh@355 -- $ echo 1
00:25:55.708 22:57:10 -- scripts/common.sh@365 -- $ ver1[v]=1
00:25:55.708 22:57:10 -- scripts/common.sh@366 -- $ decimal 2
00:25:55.708 22:57:10 -- scripts/common.sh@353 -- $ local d=2
00:25:55.708 22:57:10 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:25:55.708 22:57:10 -- scripts/common.sh@355 -- $ echo 2
00:25:55.708 22:57:10 -- scripts/common.sh@366 -- $ ver2[v]=2
00:25:55.708 22:57:10 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:25:55.708 22:57:10 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:25:55.708 22:57:10 -- scripts/common.sh@368 -- $ return 0
00:25:55.708 22:57:10 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:55.708 22:57:10 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS=
00:25:55.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:55.708 --rc genhtml_branch_coverage=1
00:25:55.708 --rc genhtml_function_coverage=1
00:25:55.708 --rc genhtml_legend=1
00:25:55.708 --rc geninfo_all_blocks=1
00:25:55.708 --rc geninfo_unexecuted_blocks=1
00:25:55.708
00:25:55.708 '
00:25:55.708 22:57:10 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS='
00:25:55.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:55.708 --rc genhtml_branch_coverage=1
00:25:55.708 --rc genhtml_function_coverage=1
00:25:55.708 --rc genhtml_legend=1
00:25:55.708 --rc geninfo_all_blocks=1
00:25:55.708 --rc geninfo_unexecuted_blocks=1
00:25:55.708
00:25:55.708 '
00:25:55.708 22:57:10 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov
00:25:55.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:55.708 --rc genhtml_branch_coverage=1
00:25:55.708 --rc genhtml_function_coverage=1
00:25:55.708 --rc genhtml_legend=1
00:25:55.708 --rc geninfo_all_blocks=1
00:25:55.708 --rc geninfo_unexecuted_blocks=1
00:25:55.708
00:25:55.708 '
00:25:55.708 22:57:10 -- common/autotest_common.sh@1695 -- $ LCOV='lcov
00:25:55.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:55.708 --rc genhtml_branch_coverage=1
00:25:55.708 --rc genhtml_function_coverage=1
00:25:55.708 --rc genhtml_legend=1
00:25:55.708 --rc geninfo_all_blocks=1
00:25:55.708 --rc geninfo_unexecuted_blocks=1
00:25:55.708
00:25:55.708 '
00:25:55.708 22:57:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:25:55.708 22:57:10 -- scripts/common.sh@15 -- $ shopt -s extglob
00:25:55.708 22:57:10 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:25:55.708 22:57:10 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:55.708 22:57:10 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:55.708 22:57:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:55.708 22:57:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:55.708 22:57:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:55.708 22:57:10 -- paths/export.sh@5 -- $ export PATH
00:25:55.708 22:57:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:55.708 22:57:10 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:25:55.708 22:57:10 -- common/autobuild_common.sh@479 -- $ date +%s
00:25:55.708 22:57:10 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1733612230.XXXXXX
00:25:55.708 22:57:10 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1733612230.uHAa7Y
00:25:55.708 22:57:10 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:25:55.708 22:57:10 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']'
00:25:55.708 22:57:10 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:25:55.708 22:57:10 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:25:55.708 22:57:10 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:25:55.708 22:57:10 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:25:55.708 22:57:10 -- common/autobuild_common.sh@495 -- $ get_config_params
00:25:55.708 22:57:10 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:25:55.708 22:57:10 -- common/autotest_common.sh@10 -- $ set +x
00:25:55.708 22:57:10 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:25:55.708 22:57:10 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:25:55.708 22:57:10 -- pm/common@17 -- $ local monitor
00:25:55.708 22:57:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:55.708 22:57:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:55.708 22:57:10 -- pm/common@25 -- $ sleep 1
00:25:55.708 22:57:10 -- pm/common@21 -- $ date +%s
00:25:55.708 22:57:10 -- pm/common@21 -- $ date +%s
00:25:55.708 22:57:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733612230
00:25:55.708 22:57:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1733612230
00:25:55.708 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733612230_collect-vmstat.pm.log
00:25:55.708 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1733612230_collect-cpu-load.pm.log
00:25:56.647 22:57:11 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:25:56.647 22:57:11 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]]
00:25:56.647 22:57:11 -- spdk/autopackage.sh@14 -- $ timing_finish
00:25:56.647 22:57:11 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:25:56.647 22:57:11 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:25:56.647 22:57:11 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:25:56.647 22:57:11 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:25:56.647 22:57:11 -- pm/common@29 -- $ signal_monitor_resources TERM
00:25:56.647 22:57:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:25:56.647 22:57:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:56.647 22:57:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:25:56.647 22:57:11 -- pm/common@44 -- $ pid=101578
00:25:56.647 22:57:11 -- pm/common@50 -- $ kill -TERM 101578
00:25:56.647 22:57:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:56.647 22:57:11 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:25:56.647 22:57:11 -- pm/common@44 -- $ pid=101579
00:25:56.647 22:57:11 -- pm/common@50 -- $ kill -TERM 101579
00:25:56.655 + [[ -n 5990 ]]
00:25:56.655 + sudo kill 5990
00:25:56.663 [Pipeline] }
00:25:56.673 [Pipeline] // timeout
00:25:56.679 [Pipeline] }
00:25:56.696 [Pipeline] // stage
00:25:56.702 [Pipeline] }
00:25:56.720 [Pipeline] // catchError
00:25:56.730 [Pipeline] stage
00:25:56.732 [Pipeline] { (Stop VM)
00:25:56.744 [Pipeline] sh
00:25:57.025 + vagrant halt
00:25:59.562 ==> default: Halting domain...
00:26:06.150 [Pipeline] sh
00:26:06.431 + vagrant destroy -f
00:26:09.721 ==> default: Removing domain...
00:26:09.734 [Pipeline] sh
00:26:10.017 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:26:10.026 [Pipeline] }
00:26:10.042 [Pipeline] // stage
00:26:10.047 [Pipeline] }
00:26:10.061 [Pipeline] // dir
00:26:10.066 [Pipeline] }
00:26:10.081 [Pipeline] // wrap
00:26:10.087 [Pipeline] }
00:26:10.100 [Pipeline] // catchError
00:26:10.110 [Pipeline] stage
00:26:10.112 [Pipeline] { (Epilogue)
00:26:10.126 [Pipeline] sh
00:26:10.408 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:26:15.685 [Pipeline] catchError
00:26:15.687 [Pipeline] {
00:26:15.699 [Pipeline] sh
00:26:15.978 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:26:16.238 Artifacts sizes are good
00:26:16.247 [Pipeline] }
00:26:16.262 [Pipeline] // catchError
00:26:16.273 [Pipeline] archiveArtifacts
00:26:16.281 Archiving artifacts
00:26:16.434 [Pipeline] cleanWs
00:26:16.449 [WS-CLEANUP] Deleting project workspace...
00:26:16.449 [WS-CLEANUP] Deferred wipeout is used...
00:26:16.478 [WS-CLEANUP] done
00:26:16.480 [Pipeline] }
00:26:16.495 [Pipeline] // stage
00:26:16.500 [Pipeline] }
00:26:16.514 [Pipeline] // node
00:26:16.519 [Pipeline] End of Pipeline
00:26:16.556 Finished: SUCCESS